00:00:00.000 Started by upstream project "autotest-per-patch" build number 132402
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.047 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.048 The recommended git tool is: git
00:00:00.049 using credential 00000000-0000-0000-0000-000000000002
00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.081 Fetching changes from the remote Git repository
00:00:00.083 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.130 Using shallow fetch with depth 1
00:00:00.130 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.130 > git --version # timeout=10
00:00:00.198 > git --version # 'git version 2.39.2'
00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.925 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.938 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.951 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.951 > git config core.sparsecheckout # timeout=10
00:00:03.961 > git read-tree -mu HEAD # timeout=10
00:00:03.977 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.999 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.999 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.087 [Pipeline] Start of Pipeline
00:00:04.100 [Pipeline] library
00:00:04.101 Loading library shm_lib@master
00:00:04.102 Library shm_lib@master is cached. Copying from home.
00:00:04.122 [Pipeline] node
00:00:04.129 Running on VM-host-SM0 in /var/jenkins/workspace/raid-vg-autotest
00:00:04.131 [Pipeline] {
00:00:04.142 [Pipeline] catchError
00:00:04.144 [Pipeline] {
00:00:04.158 [Pipeline] wrap
00:00:04.168 [Pipeline] {
00:00:04.177 [Pipeline] stage
00:00:04.179 [Pipeline] { (Prologue)
00:00:04.195 [Pipeline] echo
00:00:04.196 Node: VM-host-SM0
00:00:04.201 [Pipeline] cleanWs
00:00:04.209 [WS-CLEANUP] Deleting project workspace...
00:00:04.209 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.214 [WS-CLEANUP] done
00:00:04.408 [Pipeline] setCustomBuildProperty
00:00:04.491 [Pipeline] httpRequest
00:00:04.959 [Pipeline] echo
00:00:04.960 Sorcerer 10.211.164.20 is alive
00:00:04.971 [Pipeline] retry
00:00:04.973 [Pipeline] {
00:00:04.987 [Pipeline] httpRequest
00:00:04.992 HttpMethod: GET
00:00:04.993 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.994 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.995 Response Code: HTTP/1.1 200 OK
00:00:04.995 Success: Status code 200 is in the accepted range: 200,404
00:00:04.996 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.420 [Pipeline] }
00:00:05.438 [Pipeline] // retry
00:00:05.446 [Pipeline] sh
00:00:05.733 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.750 [Pipeline] httpRequest
00:00:06.295 [Pipeline] echo
00:00:06.297 Sorcerer 10.211.164.20 is alive
00:00:06.305 [Pipeline] retry
00:00:06.307 [Pipeline] {
00:00:06.320 [Pipeline] httpRequest
00:00:06.325 HttpMethod: GET
00:00:06.325 URL: http://10.211.164.20/packages/spdk_23429eed711e5a65afee3f792d83aeff4d98c9a3.tar.gz
00:00:06.326 Sending request to url: http://10.211.164.20/packages/spdk_23429eed711e5a65afee3f792d83aeff4d98c9a3.tar.gz
00:00:06.332 Response Code: HTTP/1.1 200 OK
00:00:06.333 Success: Status code 200 is in the accepted range: 200,404
00:00:06.333 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_23429eed711e5a65afee3f792d83aeff4d98c9a3.tar.gz
00:01:06.267 [Pipeline] }
00:01:06.285 [Pipeline] // retry
00:01:06.293 [Pipeline] sh
00:01:06.573 + tar --no-same-owner -xf spdk_23429eed711e5a65afee3f792d83aeff4d98c9a3.tar.gz
00:01:09.869 [Pipeline] sh
00:01:10.170 + git -C spdk log --oneline -n5
00:01:10.170 23429eed7 bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:01:10.170 09ac735c8 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf()
00:01:10.170 c1691a126 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext()
00:01:10.170 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function
00:01:10.170 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:01:10.189 [Pipeline] writeFile
00:01:10.205 [Pipeline] sh
00:01:10.487 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:10.499 [Pipeline] sh
00:01:10.782 + cat autorun-spdk.conf
00:01:10.782 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.782 SPDK_RUN_ASAN=1
00:01:10.782 SPDK_RUN_UBSAN=1
00:01:10.782 SPDK_TEST_RAID=1
00:01:10.782 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:10.789 RUN_NIGHTLY=0
00:01:10.791 [Pipeline] }
00:01:10.807 [Pipeline] // stage
00:01:10.823 [Pipeline] stage
00:01:10.826 [Pipeline] { (Run VM)
00:01:10.843 [Pipeline] sh
00:01:11.130 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:11.131 + echo 'Start stage prepare_nvme.sh'
00:01:11.131 Start stage prepare_nvme.sh
00:01:11.131 + [[ -n 1 ]]
00:01:11.131 + disk_prefix=ex1
00:01:11.131 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:11.131 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:11.131 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:11.131 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.131 ++ SPDK_RUN_ASAN=1
00:01:11.131 ++ SPDK_RUN_UBSAN=1
00:01:11.131 ++ SPDK_TEST_RAID=1
00:01:11.131 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:11.131 ++ RUN_NIGHTLY=0
00:01:11.131 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:11.131 + nvme_files=()
00:01:11.131 + declare -A nvme_files
00:01:11.131 + backend_dir=/var/lib/libvirt/images/backends
00:01:11.131 + nvme_files['nvme.img']=5G
00:01:11.131 + nvme_files['nvme-cmb.img']=5G
00:01:11.131 + nvme_files['nvme-multi0.img']=4G
00:01:11.131 + nvme_files['nvme-multi1.img']=4G
00:01:11.131 + nvme_files['nvme-multi2.img']=4G
00:01:11.131 + nvme_files['nvme-openstack.img']=8G
00:01:11.131 + nvme_files['nvme-zns.img']=5G
00:01:11.131 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:11.131 + (( SPDK_TEST_FTL == 1 ))
00:01:11.131 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:11.131 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:11.131 + for nvme in "${!nvme_files[@]}"
00:01:11.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:11.131 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:11.131 + for nvme in "${!nvme_files[@]}"
00:01:11.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:11.131 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.131 + for nvme in "${!nvme_files[@]}"
00:01:11.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:11.131 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:11.131 + for nvme in "${!nvme_files[@]}"
00:01:11.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:11.131 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.131 + for nvme in "${!nvme_files[@]}"
00:01:11.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:11.131 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:11.131 + for nvme in "${!nvme_files[@]}"
00:01:11.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:11.131 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:11.131 + for nvme in "${!nvme_files[@]}"
00:01:11.131 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:11.389 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:11.389 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:11.389 + echo 'End stage prepare_nvme.sh'
00:01:11.389 End stage prepare_nvme.sh
00:01:11.401 [Pipeline] sh
00:01:11.680 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:11.680 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:11.680
00:01:11.680 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:11.680 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:11.680 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:11.680 HELP=0
00:01:11.680 DRY_RUN=0
00:01:11.680 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:11.680 NVME_DISKS_TYPE=nvme,nvme,
00:01:11.680 NVME_AUTO_CREATE=0
00:01:11.680 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:11.680 NVME_CMB=,,
00:01:11.680 NVME_PMR=,,
00:01:11.680 NVME_ZNS=,,
00:01:11.680 NVME_MS=,,
00:01:11.680 NVME_FDP=,,
00:01:11.680 SPDK_VAGRANT_DISTRO=fedora39
00:01:11.680 SPDK_VAGRANT_VMCPU=10
00:01:11.680 SPDK_VAGRANT_VMRAM=12288
00:01:11.680 SPDK_VAGRANT_PROVIDER=libvirt
00:01:11.680 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:11.680 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:11.680 SPDK_OPENSTACK_NETWORK=0
00:01:11.680 VAGRANT_PACKAGE_BOX=0
00:01:11.680 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:11.680 FORCE_DISTRO=true
00:01:11.681 VAGRANT_BOX_VERSION=
00:01:11.681 EXTRA_VAGRANTFILES=
00:01:11.681 NIC_MODEL=e1000
00:01:11.681
00:01:11.681 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:11.681 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:15.010 Bringing machine 'default' up with 'libvirt' provider...
00:01:15.576 ==> default: Creating image (snapshot of base box volume).
00:01:15.834 ==> default: Creating domain with the following settings...
00:01:15.834 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732112296_887c642a46f2dad45d69
00:01:15.834 ==> default: -- Domain type: kvm
00:01:15.834 ==> default: -- Cpus: 10
00:01:15.834 ==> default: -- Feature: acpi
00:01:15.834 ==> default: -- Feature: apic
00:01:15.834 ==> default: -- Feature: pae
00:01:15.835 ==> default: -- Memory: 12288M
00:01:15.835 ==> default: -- Memory Backing: hugepages:
00:01:15.835 ==> default: -- Management MAC:
00:01:15.835 ==> default: -- Loader:
00:01:15.835 ==> default: -- Nvram:
00:01:15.835 ==> default: -- Base box: spdk/fedora39
00:01:15.835 ==> default: -- Storage pool: default
00:01:15.835 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732112296_887c642a46f2dad45d69.img (20G)
00:01:15.835 ==> default: -- Volume Cache: default
00:01:15.835 ==> default: -- Kernel:
00:01:15.835 ==> default: -- Initrd:
00:01:15.835 ==> default: -- Graphics Type: vnc
00:01:15.835 ==> default: -- Graphics Port: -1
00:01:15.835 ==> default: -- Graphics IP: 127.0.0.1
00:01:15.835 ==> default: -- Graphics Password: Not defined
00:01:15.835 ==> default: -- Video Type: cirrus
00:01:15.835 ==> default: -- Video VRAM: 9216
00:01:15.835 ==> default: -- Sound Type:
00:01:15.835 ==> default: -- Keymap: en-us
00:01:15.835 ==> default: -- TPM Path:
00:01:15.835 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:15.835 ==> default: -- Command line args:
00:01:15.835 ==> default: -> value=-device,
00:01:15.835 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:15.835 ==> default: -> value=-drive,
00:01:15.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:15.835 ==> default: -> value=-device,
00:01:15.835 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.835 ==> default: -> value=-device,
00:01:15.835 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:15.835 ==> default: -> value=-drive,
00:01:15.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:15.835 ==> default: -> value=-device,
00:01:15.835 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.835 ==> default: -> value=-drive,
00:01:15.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:15.835 ==> default: -> value=-device,
00:01:15.835 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.835 ==> default: -> value=-drive,
00:01:15.835 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:15.835 ==> default: -> value=-device,
00:01:15.835 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.835 ==> default: Creating shared folders metadata...
00:01:15.835 ==> default: Starting domain.
00:01:17.737 ==> default: Waiting for domain to get an IP address...
00:01:35.817 ==> default: Waiting for SSH to become available...
00:01:35.817 ==> default: Configuring and enabling network interfaces...
00:01:38.350     default: SSH address: 192.168.121.83:22
00:01:38.350     default: SSH username: vagrant
00:01:38.350     default: SSH auth method: private key
00:01:40.881 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:48.996 ==> default: Mounting SSHFS shared folder...
00:01:50.944 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:50.944 ==> default: Checking Mount..
00:01:51.880 ==> default: Folder Successfully Mounted!
00:01:51.880 ==> default: Running provisioner: file...
00:01:52.815     default: ~/.gitconfig => .gitconfig
00:01:53.073
00:01:53.073 SUCCESS!
00:01:53.073
00:01:53.073 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:53.073 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:53.073 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:53.073
00:01:53.082 [Pipeline] }
00:01:53.096 [Pipeline] // stage
00:01:53.106 [Pipeline] dir
00:01:53.106 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:53.108 [Pipeline] {
00:01:53.120 [Pipeline] catchError
00:01:53.122 [Pipeline] {
00:01:53.134 [Pipeline] sh
00:01:53.413 + vagrant ssh-config --host vagrant
00:01:53.413 + sed -ne /^Host/,$p
00:01:53.413 + tee ssh_conf
00:01:57.618 Host vagrant
00:01:57.618   HostName 192.168.121.83
00:01:57.618   User vagrant
00:01:57.618   Port 22
00:01:57.618   UserKnownHostsFile /dev/null
00:01:57.618   StrictHostKeyChecking no
00:01:57.618   PasswordAuthentication no
00:01:57.618   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:57.618   IdentitiesOnly yes
00:01:57.618   LogLevel FATAL
00:01:57.618   ForwardAgent yes
00:01:57.618   ForwardX11 yes
00:01:57.618
00:01:57.631 [Pipeline] withEnv
00:01:57.633 [Pipeline] {
00:01:57.646 [Pipeline] sh
00:01:57.923 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:57.923 source /etc/os-release
00:01:57.923 [[ -e /image.version ]] && img=$(< /image.version)
00:01:57.923 # Minimal, systemd-like check.
00:01:57.923 if [[ -e /.dockerenv ]]; then
00:01:57.923 # Clear garbage from the node's name:
00:01:57.923 # agt-er_autotest_547-896 -> autotest_547-896
00:01:57.923 # $HOSTNAME is the actual container id
00:01:57.923 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:57.923 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:57.923 # We can assume this is a mount from a host where container is running,
00:01:57.924 # so fetch its hostname to easily identify the target swarm worker.
00:01:57.924 container="$(< /etc/hostname) ($agent)"
00:01:57.924 else
00:01:57.924 # Fallback
00:01:57.924 container=$agent
00:01:57.924 fi
00:01:57.924 fi
00:01:57.924 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:57.924
00:01:58.193 [Pipeline] }
00:01:58.210 [Pipeline] // withEnv
00:01:58.219 [Pipeline] setCustomBuildProperty
00:01:58.233 [Pipeline] stage
00:01:58.236 [Pipeline] { (Tests)
00:01:58.252 [Pipeline] sh
00:01:58.533 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:58.807 [Pipeline] sh
00:01:59.086 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:59.360 [Pipeline] timeout
00:01:59.361 Timeout set to expire in 1 hr 30 min
00:01:59.363 [Pipeline] {
00:01:59.377 [Pipeline] sh
00:01:59.658 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:00.224 HEAD is now at 23429eed7 bdev: Clean up duplicated asserts in bdev_io_pull_data()
00:02:00.234 [Pipeline] sh
00:02:00.510 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:00.781 [Pipeline] sh
00:02:01.058 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:01.333 [Pipeline] sh
00:02:01.612 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:01.871 ++ readlink -f spdk_repo
00:02:01.871 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:01.871 + [[ -n /home/vagrant/spdk_repo ]]
00:02:01.871 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:01.871 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:01.871 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:01.871 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:01.871 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:01.871 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:01.871 + cd /home/vagrant/spdk_repo
00:02:01.871 + source /etc/os-release
00:02:01.871 ++ NAME='Fedora Linux'
00:02:01.871 ++ VERSION='39 (Cloud Edition)'
00:02:01.871 ++ ID=fedora
00:02:01.871 ++ VERSION_ID=39
00:02:01.871 ++ VERSION_CODENAME=
00:02:01.871 ++ PLATFORM_ID=platform:f39
00:02:01.871 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:01.871 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:01.871 ++ LOGO=fedora-logo-icon
00:02:01.871 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:01.871 ++ HOME_URL=https://fedoraproject.org/
00:02:01.871 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:01.871 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:01.871 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:01.871 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:01.871 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:01.871 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:01.871 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:01.871 ++ SUPPORT_END=2024-11-12
00:02:01.871 ++ VARIANT='Cloud Edition'
00:02:01.871 ++ VARIANT_ID=cloud
00:02:01.871 + uname -a
00:02:01.871 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:01.871 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:02.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:02.130 Hugepages
00:02:02.130 node hugesize free / total
00:02:02.130 node0 1048576kB 0 / 0
00:02:02.389 node0 2048kB 0 / 0
00:02:02.389
00:02:02.389 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:02.389 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:02.389 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:02.389 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:02.389 + rm -f /tmp/spdk-ld-path
00:02:02.389 + source autorun-spdk.conf
00:02:02.389 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.389 ++ SPDK_RUN_ASAN=1
00:02:02.389 ++ SPDK_RUN_UBSAN=1
00:02:02.389 ++ SPDK_TEST_RAID=1
00:02:02.389 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.389 ++ RUN_NIGHTLY=0
00:02:02.389 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:02.389 + [[ -n '' ]]
00:02:02.389 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:02.389 + for M in /var/spdk/build-*-manifest.txt
00:02:02.389 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:02.389 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.389 + for M in /var/spdk/build-*-manifest.txt
00:02:02.389 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:02.389 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.389 + for M in /var/spdk/build-*-manifest.txt
00:02:02.389 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:02.389 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.389 ++ uname
00:02:02.389 + [[ Linux == \L\i\n\u\x ]]
00:02:02.389 + sudo dmesg -T
00:02:02.389 + sudo dmesg --clear
00:02:02.389 + dmesg_pid=5261
00:02:02.389 + sudo dmesg -Tw
00:02:02.389 + [[ Fedora Linux == FreeBSD ]]
00:02:02.389 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.389 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.389 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:02.389 + [[ -x /usr/src/fio-static/fio ]]
00:02:02.389 + export FIO_BIN=/usr/src/fio-static/fio
00:02:02.389 + FIO_BIN=/usr/src/fio-static/fio
00:02:02.389 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:02.389 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:02.389 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:02.389 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:02.389 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:02.389 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:02.389 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:02.389 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:02.389 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:02.752 14:19:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:02.752 14:19:03 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:02.752 14:19:03 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.752 14:19:03 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:02.752 14:19:03 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:02.752 14:19:03 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:02.752 14:19:03 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.752 14:19:03 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:02:02.752 14:19:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:02.752 14:19:03 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:02.752 14:19:03 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:02.752 14:19:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:02.752 14:19:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:02.752 14:19:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:02.752 14:19:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:02.752 14:19:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:02.752 14:19:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.752 14:19:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.752 14:19:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.752 14:19:03 -- paths/export.sh@5 -- $ export PATH
00:02:02.752 14:19:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.752 14:19:03 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:02.752 14:19:03 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:02.752 14:19:03 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732112343.XXXXXX
00:02:02.752 14:19:03 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732112343.Xr6qKb
00:02:02.752 14:19:03 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:02.752 14:19:03 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:02.752 14:19:03 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:02.752 14:19:03 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:02.752 14:19:03 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:02.752 14:19:03 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:02.752 14:19:03 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:02.752 14:19:03 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.752 14:19:03 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:02.752 14:19:03 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:02.752 14:19:03 -- pm/common@17 -- $ local monitor
00:02:02.752 14:19:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:02.752 14:19:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:02.752 14:19:03 -- pm/common@25 -- $ sleep 1
00:02:02.752 14:19:03 -- pm/common@21 -- $ date +%s
00:02:02.752 14:19:03 -- pm/common@21 -- $ date +%s
00:02:02.752 14:19:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732112343
00:02:02.752 14:19:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732112343
00:02:02.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732112343_collect-cpu-load.pm.log
00:02:02.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732112343_collect-vmstat.pm.log
00:02:03.688 14:19:04 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:03.688 14:19:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:03.688 14:19:04 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:03.688 14:19:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:03.688 14:19:04 -- spdk/autobuild.sh@16 -- $ date -u
00:02:03.688 Wed Nov 20 02:19:04 PM UTC 2024
00:02:03.688 14:19:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:03.688 v25.01-pre-228-g23429eed7
00:02:03.688 14:19:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:03.688 14:19:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:03.688 14:19:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:03.688 14:19:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:03.688 14:19:04 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.688 ************************************
00:02:03.688 START TEST asan
00:02:03.688 ************************************
00:02:03.688 using asan
00:02:03.688 14:19:04 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:03.688
00:02:03.688 real 0m0.000s
00:02:03.688 user 0m0.000s
00:02:03.688 sys 0m0.000s
00:02:03.688 14:19:04 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:03.688 14:19:04 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:03.688 ************************************
00:02:03.688 END TEST asan
00:02:03.688 ************************************
00:02:03.688 14:19:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:03.688 14:19:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:03.688 14:19:04 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:03.688 14:19:04 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:03.688 14:19:04 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.688 ************************************
00:02:03.688 START TEST ubsan
00:02:03.688 ************************************
00:02:03.688 using ubsan
00:02:03.688 14:19:04 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:03.688
00:02:03.688 real 0m0.000s
00:02:03.688 user 0m0.000s
00:02:03.688 sys 0m0.000s
00:02:03.688 14:19:04 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:03.688 14:19:04 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:03.688 ************************************
00:02:03.688 END TEST ubsan
00:02:03.688 ************************************
00:02:03.688 14:19:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:03.688 14:19:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:03.688 14:19:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:03.688 14:19:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:03.688 14:19:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:03.688 14:19:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:03.688 14:19:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:03.688 14:19:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:03.688 14:19:04 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:03.947 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:03.947 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.205 Using 'verbs' RDMA provider
00:02:20.014 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:32.211 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:32.211 Creating mk/config.mk...done.
00:02:32.211 Creating mk/cc.flags.mk...done.
00:02:32.211 Type 'make' to build.
00:02:32.211 14:19:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:32.211 14:19:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:32.211 14:19:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:32.211 14:19:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:32.211 ************************************
00:02:32.211 START TEST make
00:02:32.211 ************************************
00:02:32.211 14:19:32 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:32.211 make[1]: Nothing to be done for 'all'.
00:02:47.079 The Meson build system 00:02:47.079 Version: 1.5.0 00:02:47.079 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:47.079 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:47.079 Build type: native build 00:02:47.079 Program cat found: YES (/usr/bin/cat) 00:02:47.079 Project name: DPDK 00:02:47.079 Project version: 24.03.0 00:02:47.079 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:47.079 C linker for the host machine: cc ld.bfd 2.40-14 00:02:47.079 Host machine cpu family: x86_64 00:02:47.079 Host machine cpu: x86_64 00:02:47.079 Message: ## Building in Developer Mode ## 00:02:47.079 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:47.079 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:47.079 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:47.079 Program python3 found: YES (/usr/bin/python3) 00:02:47.079 Program cat found: YES (/usr/bin/cat) 00:02:47.079 Compiler for C supports arguments -march=native: YES 00:02:47.079 Checking for size of "void *" : 8 00:02:47.079 Checking for size of "void *" : 8 (cached) 00:02:47.079 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:47.079 Library m found: YES 00:02:47.079 Library numa found: YES 00:02:47.079 Has header "numaif.h" : YES 00:02:47.079 Library fdt found: NO 00:02:47.079 Library execinfo found: NO 00:02:47.079 Has header "execinfo.h" : YES 00:02:47.079 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:47.079 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:47.079 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:47.079 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:47.079 Run-time dependency openssl found: YES 3.1.1 00:02:47.079 Run-time dependency libpcap found: YES 1.10.4 00:02:47.079 Has header "pcap.h" with dependency 
libpcap: YES 00:02:47.079 Compiler for C supports arguments -Wcast-qual: YES 00:02:47.079 Compiler for C supports arguments -Wdeprecated: YES 00:02:47.079 Compiler for C supports arguments -Wformat: YES 00:02:47.079 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:47.079 Compiler for C supports arguments -Wformat-security: NO 00:02:47.079 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:47.079 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:47.079 Compiler for C supports arguments -Wnested-externs: YES 00:02:47.079 Compiler for C supports arguments -Wold-style-definition: YES 00:02:47.079 Compiler for C supports arguments -Wpointer-arith: YES 00:02:47.079 Compiler for C supports arguments -Wsign-compare: YES 00:02:47.079 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:47.079 Compiler for C supports arguments -Wundef: YES 00:02:47.079 Compiler for C supports arguments -Wwrite-strings: YES 00:02:47.079 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:47.079 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:47.079 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:47.079 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:47.079 Program objdump found: YES (/usr/bin/objdump) 00:02:47.079 Compiler for C supports arguments -mavx512f: YES 00:02:47.079 Checking if "AVX512 checking" compiles: YES 00:02:47.079 Fetching value of define "__SSE4_2__" : 1 00:02:47.079 Fetching value of define "__AES__" : 1 00:02:47.079 Fetching value of define "__AVX__" : 1 00:02:47.079 Fetching value of define "__AVX2__" : 1 00:02:47.079 Fetching value of define "__AVX512BW__" : (undefined) 00:02:47.079 Fetching value of define "__AVX512CD__" : (undefined) 00:02:47.079 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:47.079 Fetching value of define "__AVX512F__" : (undefined) 00:02:47.079 Fetching value of define "__AVX512VL__" : 
(undefined) 00:02:47.079 Fetching value of define "__PCLMUL__" : 1 00:02:47.079 Fetching value of define "__RDRND__" : 1 00:02:47.079 Fetching value of define "__RDSEED__" : 1 00:02:47.079 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:47.079 Fetching value of define "__znver1__" : (undefined) 00:02:47.080 Fetching value of define "__znver2__" : (undefined) 00:02:47.080 Fetching value of define "__znver3__" : (undefined) 00:02:47.080 Fetching value of define "__znver4__" : (undefined) 00:02:47.080 Library asan found: YES 00:02:47.080 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:47.080 Message: lib/log: Defining dependency "log" 00:02:47.080 Message: lib/kvargs: Defining dependency "kvargs" 00:02:47.080 Message: lib/telemetry: Defining dependency "telemetry" 00:02:47.080 Library rt found: YES 00:02:47.080 Checking for function "getentropy" : NO 00:02:47.080 Message: lib/eal: Defining dependency "eal" 00:02:47.080 Message: lib/ring: Defining dependency "ring" 00:02:47.080 Message: lib/rcu: Defining dependency "rcu" 00:02:47.080 Message: lib/mempool: Defining dependency "mempool" 00:02:47.080 Message: lib/mbuf: Defining dependency "mbuf" 00:02:47.080 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:47.080 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:47.080 Compiler for C supports arguments -mpclmul: YES 00:02:47.080 Compiler for C supports arguments -maes: YES 00:02:47.080 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:47.080 Compiler for C supports arguments -mavx512bw: YES 00:02:47.080 Compiler for C supports arguments -mavx512dq: YES 00:02:47.080 Compiler for C supports arguments -mavx512vl: YES 00:02:47.080 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:47.080 Compiler for C supports arguments -mavx2: YES 00:02:47.080 Compiler for C supports arguments -mavx: YES 00:02:47.080 Message: lib/net: Defining dependency "net" 00:02:47.080 Message: lib/meter: Defining 
dependency "meter" 00:02:47.080 Message: lib/ethdev: Defining dependency "ethdev" 00:02:47.080 Message: lib/pci: Defining dependency "pci" 00:02:47.080 Message: lib/cmdline: Defining dependency "cmdline" 00:02:47.080 Message: lib/hash: Defining dependency "hash" 00:02:47.080 Message: lib/timer: Defining dependency "timer" 00:02:47.080 Message: lib/compressdev: Defining dependency "compressdev" 00:02:47.080 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:47.080 Message: lib/dmadev: Defining dependency "dmadev" 00:02:47.080 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:47.080 Message: lib/power: Defining dependency "power" 00:02:47.080 Message: lib/reorder: Defining dependency "reorder" 00:02:47.080 Message: lib/security: Defining dependency "security" 00:02:47.080 Has header "linux/userfaultfd.h" : YES 00:02:47.080 Has header "linux/vduse.h" : YES 00:02:47.080 Message: lib/vhost: Defining dependency "vhost" 00:02:47.080 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:47.080 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:47.080 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:47.080 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:47.080 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:47.080 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:47.080 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:47.080 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:47.080 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:47.080 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:47.080 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:47.080 Configuring doxy-api-html.conf using configuration 00:02:47.080 Configuring doxy-api-man.conf using configuration 00:02:47.080 Program mandb found: YES 
(/usr/bin/mandb) 00:02:47.080 Program sphinx-build found: NO 00:02:47.080 Configuring rte_build_config.h using configuration 00:02:47.080 Message: 00:02:47.080 ================= 00:02:47.080 Applications Enabled 00:02:47.080 ================= 00:02:47.080 00:02:47.080 apps: 00:02:47.080 00:02:47.080 00:02:47.080 Message: 00:02:47.080 ================= 00:02:47.080 Libraries Enabled 00:02:47.080 ================= 00:02:47.080 00:02:47.080 libs: 00:02:47.080 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:47.080 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:47.080 cryptodev, dmadev, power, reorder, security, vhost, 00:02:47.080 00:02:47.080 Message: 00:02:47.080 =============== 00:02:47.080 Drivers Enabled 00:02:47.080 =============== 00:02:47.080 00:02:47.080 common: 00:02:47.080 00:02:47.080 bus: 00:02:47.080 pci, vdev, 00:02:47.080 mempool: 00:02:47.080 ring, 00:02:47.080 dma: 00:02:47.080 00:02:47.080 net: 00:02:47.080 00:02:47.080 crypto: 00:02:47.080 00:02:47.080 compress: 00:02:47.080 00:02:47.080 vdpa: 00:02:47.080 00:02:47.080 00:02:47.080 Message: 00:02:47.080 ================= 00:02:47.080 Content Skipped 00:02:47.080 ================= 00:02:47.080 00:02:47.080 apps: 00:02:47.080 dumpcap: explicitly disabled via build config 00:02:47.080 graph: explicitly disabled via build config 00:02:47.080 pdump: explicitly disabled via build config 00:02:47.080 proc-info: explicitly disabled via build config 00:02:47.080 test-acl: explicitly disabled via build config 00:02:47.080 test-bbdev: explicitly disabled via build config 00:02:47.080 test-cmdline: explicitly disabled via build config 00:02:47.080 test-compress-perf: explicitly disabled via build config 00:02:47.080 test-crypto-perf: explicitly disabled via build config 00:02:47.080 test-dma-perf: explicitly disabled via build config 00:02:47.080 test-eventdev: explicitly disabled via build config 00:02:47.080 test-fib: explicitly disabled via build config 00:02:47.080 
test-flow-perf: explicitly disabled via build config 00:02:47.080 test-gpudev: explicitly disabled via build config 00:02:47.080 test-mldev: explicitly disabled via build config 00:02:47.080 test-pipeline: explicitly disabled via build config 00:02:47.080 test-pmd: explicitly disabled via build config 00:02:47.080 test-regex: explicitly disabled via build config 00:02:47.080 test-sad: explicitly disabled via build config 00:02:47.080 test-security-perf: explicitly disabled via build config 00:02:47.080 00:02:47.080 libs: 00:02:47.080 argparse: explicitly disabled via build config 00:02:47.080 metrics: explicitly disabled via build config 00:02:47.080 acl: explicitly disabled via build config 00:02:47.080 bbdev: explicitly disabled via build config 00:02:47.080 bitratestats: explicitly disabled via build config 00:02:47.080 bpf: explicitly disabled via build config 00:02:47.080 cfgfile: explicitly disabled via build config 00:02:47.080 distributor: explicitly disabled via build config 00:02:47.080 efd: explicitly disabled via build config 00:02:47.080 eventdev: explicitly disabled via build config 00:02:47.080 dispatcher: explicitly disabled via build config 00:02:47.080 gpudev: explicitly disabled via build config 00:02:47.080 gro: explicitly disabled via build config 00:02:47.080 gso: explicitly disabled via build config 00:02:47.080 ip_frag: explicitly disabled via build config 00:02:47.080 jobstats: explicitly disabled via build config 00:02:47.080 latencystats: explicitly disabled via build config 00:02:47.080 lpm: explicitly disabled via build config 00:02:47.080 member: explicitly disabled via build config 00:02:47.080 pcapng: explicitly disabled via build config 00:02:47.080 rawdev: explicitly disabled via build config 00:02:47.080 regexdev: explicitly disabled via build config 00:02:47.080 mldev: explicitly disabled via build config 00:02:47.080 rib: explicitly disabled via build config 00:02:47.080 sched: explicitly disabled via build config 00:02:47.080 
stack: explicitly disabled via build config 00:02:47.080 ipsec: explicitly disabled via build config 00:02:47.080 pdcp: explicitly disabled via build config 00:02:47.080 fib: explicitly disabled via build config 00:02:47.080 port: explicitly disabled via build config 00:02:47.080 pdump: explicitly disabled via build config 00:02:47.080 table: explicitly disabled via build config 00:02:47.080 pipeline: explicitly disabled via build config 00:02:47.080 graph: explicitly disabled via build config 00:02:47.080 node: explicitly disabled via build config 00:02:47.080 00:02:47.080 drivers: 00:02:47.080 common/cpt: not in enabled drivers build config 00:02:47.080 common/dpaax: not in enabled drivers build config 00:02:47.080 common/iavf: not in enabled drivers build config 00:02:47.080 common/idpf: not in enabled drivers build config 00:02:47.080 common/ionic: not in enabled drivers build config 00:02:47.080 common/mvep: not in enabled drivers build config 00:02:47.080 common/octeontx: not in enabled drivers build config 00:02:47.080 bus/auxiliary: not in enabled drivers build config 00:02:47.080 bus/cdx: not in enabled drivers build config 00:02:47.080 bus/dpaa: not in enabled drivers build config 00:02:47.080 bus/fslmc: not in enabled drivers build config 00:02:47.080 bus/ifpga: not in enabled drivers build config 00:02:47.080 bus/platform: not in enabled drivers build config 00:02:47.080 bus/uacce: not in enabled drivers build config 00:02:47.080 bus/vmbus: not in enabled drivers build config 00:02:47.080 common/cnxk: not in enabled drivers build config 00:02:47.080 common/mlx5: not in enabled drivers build config 00:02:47.080 common/nfp: not in enabled drivers build config 00:02:47.080 common/nitrox: not in enabled drivers build config 00:02:47.080 common/qat: not in enabled drivers build config 00:02:47.080 common/sfc_efx: not in enabled drivers build config 00:02:47.080 mempool/bucket: not in enabled drivers build config 00:02:47.080 mempool/cnxk: not in enabled 
drivers build config 00:02:47.080 mempool/dpaa: not in enabled drivers build config 00:02:47.080 mempool/dpaa2: not in enabled drivers build config 00:02:47.081 mempool/octeontx: not in enabled drivers build config 00:02:47.081 mempool/stack: not in enabled drivers build config 00:02:47.081 dma/cnxk: not in enabled drivers build config 00:02:47.081 dma/dpaa: not in enabled drivers build config 00:02:47.081 dma/dpaa2: not in enabled drivers build config 00:02:47.081 dma/hisilicon: not in enabled drivers build config 00:02:47.081 dma/idxd: not in enabled drivers build config 00:02:47.081 dma/ioat: not in enabled drivers build config 00:02:47.081 dma/skeleton: not in enabled drivers build config 00:02:47.081 net/af_packet: not in enabled drivers build config 00:02:47.081 net/af_xdp: not in enabled drivers build config 00:02:47.081 net/ark: not in enabled drivers build config 00:02:47.081 net/atlantic: not in enabled drivers build config 00:02:47.081 net/avp: not in enabled drivers build config 00:02:47.081 net/axgbe: not in enabled drivers build config 00:02:47.081 net/bnx2x: not in enabled drivers build config 00:02:47.081 net/bnxt: not in enabled drivers build config 00:02:47.081 net/bonding: not in enabled drivers build config 00:02:47.081 net/cnxk: not in enabled drivers build config 00:02:47.081 net/cpfl: not in enabled drivers build config 00:02:47.081 net/cxgbe: not in enabled drivers build config 00:02:47.081 net/dpaa: not in enabled drivers build config 00:02:47.081 net/dpaa2: not in enabled drivers build config 00:02:47.081 net/e1000: not in enabled drivers build config 00:02:47.081 net/ena: not in enabled drivers build config 00:02:47.081 net/enetc: not in enabled drivers build config 00:02:47.081 net/enetfec: not in enabled drivers build config 00:02:47.081 net/enic: not in enabled drivers build config 00:02:47.081 net/failsafe: not in enabled drivers build config 00:02:47.081 net/fm10k: not in enabled drivers build config 00:02:47.081 net/gve: not in 
enabled drivers build config 00:02:47.081 net/hinic: not in enabled drivers build config 00:02:47.081 net/hns3: not in enabled drivers build config 00:02:47.081 net/i40e: not in enabled drivers build config 00:02:47.081 net/iavf: not in enabled drivers build config 00:02:47.081 net/ice: not in enabled drivers build config 00:02:47.081 net/idpf: not in enabled drivers build config 00:02:47.081 net/igc: not in enabled drivers build config 00:02:47.081 net/ionic: not in enabled drivers build config 00:02:47.081 net/ipn3ke: not in enabled drivers build config 00:02:47.081 net/ixgbe: not in enabled drivers build config 00:02:47.081 net/mana: not in enabled drivers build config 00:02:47.081 net/memif: not in enabled drivers build config 00:02:47.081 net/mlx4: not in enabled drivers build config 00:02:47.081 net/mlx5: not in enabled drivers build config 00:02:47.081 net/mvneta: not in enabled drivers build config 00:02:47.081 net/mvpp2: not in enabled drivers build config 00:02:47.081 net/netvsc: not in enabled drivers build config 00:02:47.081 net/nfb: not in enabled drivers build config 00:02:47.081 net/nfp: not in enabled drivers build config 00:02:47.081 net/ngbe: not in enabled drivers build config 00:02:47.081 net/null: not in enabled drivers build config 00:02:47.081 net/octeontx: not in enabled drivers build config 00:02:47.081 net/octeon_ep: not in enabled drivers build config 00:02:47.081 net/pcap: not in enabled drivers build config 00:02:47.081 net/pfe: not in enabled drivers build config 00:02:47.081 net/qede: not in enabled drivers build config 00:02:47.081 net/ring: not in enabled drivers build config 00:02:47.081 net/sfc: not in enabled drivers build config 00:02:47.081 net/softnic: not in enabled drivers build config 00:02:47.081 net/tap: not in enabled drivers build config 00:02:47.081 net/thunderx: not in enabled drivers build config 00:02:47.081 net/txgbe: not in enabled drivers build config 00:02:47.081 net/vdev_netvsc: not in enabled drivers build 
config 00:02:47.081 net/vhost: not in enabled drivers build config 00:02:47.081 net/virtio: not in enabled drivers build config 00:02:47.081 net/vmxnet3: not in enabled drivers build config 00:02:47.081 raw/*: missing internal dependency, "rawdev" 00:02:47.081 crypto/armv8: not in enabled drivers build config 00:02:47.081 crypto/bcmfs: not in enabled drivers build config 00:02:47.081 crypto/caam_jr: not in enabled drivers build config 00:02:47.081 crypto/ccp: not in enabled drivers build config 00:02:47.081 crypto/cnxk: not in enabled drivers build config 00:02:47.081 crypto/dpaa_sec: not in enabled drivers build config 00:02:47.081 crypto/dpaa2_sec: not in enabled drivers build config 00:02:47.081 crypto/ipsec_mb: not in enabled drivers build config 00:02:47.081 crypto/mlx5: not in enabled drivers build config 00:02:47.081 crypto/mvsam: not in enabled drivers build config 00:02:47.081 crypto/nitrox: not in enabled drivers build config 00:02:47.081 crypto/null: not in enabled drivers build config 00:02:47.081 crypto/octeontx: not in enabled drivers build config 00:02:47.081 crypto/openssl: not in enabled drivers build config 00:02:47.081 crypto/scheduler: not in enabled drivers build config 00:02:47.081 crypto/uadk: not in enabled drivers build config 00:02:47.081 crypto/virtio: not in enabled drivers build config 00:02:47.081 compress/isal: not in enabled drivers build config 00:02:47.081 compress/mlx5: not in enabled drivers build config 00:02:47.081 compress/nitrox: not in enabled drivers build config 00:02:47.081 compress/octeontx: not in enabled drivers build config 00:02:47.081 compress/zlib: not in enabled drivers build config 00:02:47.081 regex/*: missing internal dependency, "regexdev" 00:02:47.081 ml/*: missing internal dependency, "mldev" 00:02:47.081 vdpa/ifc: not in enabled drivers build config 00:02:47.081 vdpa/mlx5: not in enabled drivers build config 00:02:47.081 vdpa/nfp: not in enabled drivers build config 00:02:47.081 vdpa/sfc: not in enabled 
drivers build config 00:02:47.081 event/*: missing internal dependency, "eventdev" 00:02:47.081 baseband/*: missing internal dependency, "bbdev" 00:02:47.081 gpu/*: missing internal dependency, "gpudev" 00:02:47.081 00:02:47.081 00:02:47.081 Build targets in project: 85 00:02:47.081 00:02:47.081 DPDK 24.03.0 00:02:47.081 00:02:47.081 User defined options 00:02:47.081 buildtype : debug 00:02:47.081 default_library : shared 00:02:47.081 libdir : lib 00:02:47.081 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:47.081 b_sanitize : address 00:02:47.081 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:47.081 c_link_args : 00:02:47.081 cpu_instruction_set: native 00:02:47.081 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:47.081 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:47.081 enable_docs : false 00:02:47.081 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:47.081 enable_kmods : false 00:02:47.081 max_lcores : 128 00:02:47.081 tests : false 00:02:47.081 00:02:47.081 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:47.081 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:47.081 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:47.081 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:47.340 [3/268] Linking static target lib/librte_log.a 00:02:47.340 [4/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:47.340 [5/268] Linking static target lib/librte_kvargs.a 00:02:47.340 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:47.905 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.905 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:47.905 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:47.905 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:48.162 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.162 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:48.162 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.162 [14/268] Linking static target lib/librte_telemetry.a 00:02:48.162 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:48.419 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.419 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.419 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.419 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.419 [20/268] Linking target lib/librte_log.so.24.1 00:02:48.985 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:48.985 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:48.985 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:48.985 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:49.242 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.242 [26/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:49.242 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.242 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:49.242 [29/268] Linking target lib/librte_telemetry.so.24.1 00:02:49.242 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.500 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.500 [32/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.759 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.759 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:50.017 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:50.017 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:50.276 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:50.276 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:50.276 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.276 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:50.276 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:50.276 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:50.276 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:50.276 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:50.276 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:50.534 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:50.792 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.792 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.792 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.792 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:51.050 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:51.307 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:51.307 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:51.307 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.308 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.308 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:51.565 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:51.565 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.565 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.823 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.823 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.823 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.823 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.823 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:52.081 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:52.338 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:52.338 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:52.338 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.339 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.339 [70/268] 
Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:52.596 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.596 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:52.596 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.596 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.596 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.596 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.854 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:53.199 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:53.199 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:53.199 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:53.199 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:53.199 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:53.199 [83/268] Linking static target lib/librte_ring.a 00:02:53.199 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:53.199 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:53.456 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:53.713 [87/268] Linking static target lib/librte_eal.a 00:02:53.713 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:53.713 [89/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.713 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:53.969 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.969 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:53.969 [93/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:54.227 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:54.227 [95/268] Linking static target lib/librte_mempool.a 00:02:54.227 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:54.484 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:54.484 [98/268] Linking static target lib/librte_rcu.a 00:02:54.484 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:54.484 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:54.741 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.741 [102/268] Linking static target lib/librte_mbuf.a 00:02:54.741 [103/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:54.741 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:54.741 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:54.998 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:54.998 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.998 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:54.998 [109/268] Linking static target lib/librte_net.a 00:02:55.255 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:55.255 [111/268] Linking static target lib/librte_meter.a 00:02:55.514 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:55.514 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.514 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:55.514 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:55.514 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.772 [117/268] Generating lib/meter.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:55.772 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.772 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:56.030 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.597 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.597 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:56.855 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:57.114 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.114 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:57.114 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:57.114 [127/268] Linking static target lib/librte_pci.a 00:02:57.114 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:57.114 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:57.371 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:57.371 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:57.371 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:57.372 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.659 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:57.659 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:57.659 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.659 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.659 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:57.659 [139/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.659 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.659 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.659 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:57.659 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:57.935 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.935 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:58.193 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:58.193 [147/268] Linking static target lib/librte_cmdline.a 00:02:58.451 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:58.451 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:58.451 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:58.708 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.708 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.967 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.967 [154/268] Linking static target lib/librte_timer.a 00:02:58.967 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:59.224 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.224 [157/268] Linking static target lib/librte_ethdev.a 00:02:59.224 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:59.224 [159/268] Linking static target lib/librte_compressdev.a 00:02:59.224 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:59.224 [161/268] Linking static target lib/librte_hash.a 00:02:59.224 [162/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:59.790 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:59.790 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.790 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:00.048 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.048 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.048 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:00.048 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.048 [170/268] Linking static target lib/librte_dmadev.a 00:03:00.048 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:00.305 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.305 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:00.563 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:00.563 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.821 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:01.078 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.078 [178/268] Linking static target lib/librte_cryptodev.a 00:03:01.078 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.078 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:01.078 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:01.078 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:01.335 [183/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:01.335 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:01.335 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.335 [186/268] Linking static target lib/librte_power.a 00:03:01.900 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.900 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.900 [189/268] Linking static target lib/librte_reorder.a 00:03:01.900 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.900 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:02.504 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.504 [193/268] Linking static target lib/librte_security.a 00:03:02.504 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:02.504 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.762 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.328 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.328 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:03.328 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.328 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:03.328 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.328 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:03.585 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.844 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:03.844 [205/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.105 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.105 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.105 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.105 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.364 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.364 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.364 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.364 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.364 [214/268] Linking static target drivers/librte_bus_vdev.a 00:03:04.365 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.624 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.624 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.624 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.624 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:04.883 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.883 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.883 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.142 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.142 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.142 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
00:03:05.142 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:05.142 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.078 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.078 [229/268] Linking target lib/librte_eal.so.24.1 00:03:06.078 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:06.078 [231/268] Linking target lib/librte_ring.so.24.1 00:03:06.078 [232/268] Linking target lib/librte_meter.so.24.1 00:03:06.078 [233/268] Linking target lib/librte_timer.so.24.1 00:03:06.078 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:06.078 [235/268] Linking target lib/librte_pci.so.24.1 00:03:06.336 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:06.336 [237/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:06.337 [238/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.337 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:06.337 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:06.337 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:06.337 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:06.337 [243/268] Linking target lib/librte_rcu.so.24.1 00:03:06.337 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:06.337 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:06.595 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:06.595 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:06.595 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:06.595 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:06.853 [250/268] 
Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:06.853 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:06.853 [252/268] Linking target lib/librte_compressdev.so.24.1 00:03:06.853 [253/268] Linking target lib/librte_net.so.24.1 00:03:06.853 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:06.853 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:06.853 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:06.853 [257/268] Linking target lib/librte_hash.so.24.1 00:03:06.853 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:06.853 [259/268] Linking target lib/librte_security.so.24.1 00:03:07.113 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:07.680 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.680 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:07.938 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:07.938 [264/268] Linking target lib/librte_power.so.24.1 00:03:11.221 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:11.221 [266/268] Linking static target lib/librte_vhost.a 00:03:13.120 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.120 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:13.120 INFO: autodetecting backend as ninja 00:03:13.120 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:35.037 CC lib/ut_mock/mock.o 00:03:35.037 CC lib/ut/ut.o 00:03:35.037 CC lib/log/log.o 00:03:35.037 CC lib/log/log_flags.o 00:03:35.037 CC lib/log/log_deprecated.o 00:03:35.037 LIB libspdk_ut.a 00:03:35.037 LIB libspdk_ut_mock.a 00:03:35.037 LIB libspdk_log.a 00:03:35.037 SO libspdk_ut_mock.so.6.0 
00:03:35.037 SO libspdk_ut.so.2.0 00:03:35.037 SO libspdk_log.so.7.1 00:03:35.037 SYMLINK libspdk_ut_mock.so 00:03:35.037 SYMLINK libspdk_ut.so 00:03:35.037 SYMLINK libspdk_log.so 00:03:35.037 CC lib/dma/dma.o 00:03:35.037 CC lib/ioat/ioat.o 00:03:35.037 CC lib/util/bit_array.o 00:03:35.037 CC lib/util/base64.o 00:03:35.037 CC lib/util/cpuset.o 00:03:35.037 CC lib/util/crc16.o 00:03:35.037 CC lib/util/crc32.o 00:03:35.037 CC lib/util/crc32c.o 00:03:35.037 CXX lib/trace_parser/trace.o 00:03:35.037 CC lib/vfio_user/host/vfio_user_pci.o 00:03:35.037 CC lib/vfio_user/host/vfio_user.o 00:03:35.037 CC lib/util/crc32_ieee.o 00:03:35.037 CC lib/util/crc64.o 00:03:35.037 CC lib/util/dif.o 00:03:35.037 CC lib/util/fd.o 00:03:35.037 LIB libspdk_ioat.a 00:03:35.037 CC lib/util/fd_group.o 00:03:35.037 SO libspdk_ioat.so.7.0 00:03:35.037 CC lib/util/file.o 00:03:35.037 CC lib/util/hexlify.o 00:03:35.037 LIB libspdk_dma.a 00:03:35.037 CC lib/util/iov.o 00:03:35.037 LIB libspdk_vfio_user.a 00:03:35.037 SO libspdk_dma.so.5.0 00:03:35.037 SYMLINK libspdk_ioat.so 00:03:35.037 SO libspdk_vfio_user.so.5.0 00:03:35.037 SYMLINK libspdk_dma.so 00:03:35.037 CC lib/util/math.o 00:03:35.037 CC lib/util/net.o 00:03:35.037 CC lib/util/pipe.o 00:03:35.037 SYMLINK libspdk_vfio_user.so 00:03:35.037 CC lib/util/strerror_tls.o 00:03:35.037 CC lib/util/string.o 00:03:35.037 CC lib/util/uuid.o 00:03:35.037 CC lib/util/xor.o 00:03:35.037 CC lib/util/zipf.o 00:03:35.037 CC lib/util/md5.o 00:03:35.037 LIB libspdk_util.a 00:03:35.037 LIB libspdk_trace_parser.a 00:03:35.037 SO libspdk_util.so.10.1 00:03:35.037 SO libspdk_trace_parser.so.6.0 00:03:35.037 SYMLINK libspdk_trace_parser.so 00:03:35.037 SYMLINK libspdk_util.so 00:03:35.037 CC lib/env_dpdk/env.o 00:03:35.037 CC lib/env_dpdk/pci.o 00:03:35.037 CC lib/env_dpdk/memory.o 00:03:35.037 CC lib/env_dpdk/threads.o 00:03:35.037 CC lib/env_dpdk/init.o 00:03:35.037 CC lib/vmd/vmd.o 00:03:35.037 CC lib/conf/conf.o 00:03:35.037 CC lib/idxd/idxd.o 00:03:35.037 
CC lib/rdma_utils/rdma_utils.o 00:03:35.037 CC lib/json/json_parse.o 00:03:35.296 CC lib/env_dpdk/pci_ioat.o 00:03:35.296 CC lib/json/json_util.o 00:03:35.296 LIB libspdk_rdma_utils.a 00:03:35.296 SO libspdk_rdma_utils.so.1.0 00:03:35.555 LIB libspdk_conf.a 00:03:35.555 CC lib/vmd/led.o 00:03:35.555 SYMLINK libspdk_rdma_utils.so 00:03:35.555 SO libspdk_conf.so.6.0 00:03:35.555 CC lib/env_dpdk/pci_virtio.o 00:03:35.555 SYMLINK libspdk_conf.so 00:03:35.555 CC lib/env_dpdk/pci_vmd.o 00:03:35.555 CC lib/json/json_write.o 00:03:35.555 CC lib/env_dpdk/pci_idxd.o 00:03:35.555 CC lib/env_dpdk/pci_event.o 00:03:35.813 CC lib/rdma_provider/common.o 00:03:35.813 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:35.813 CC lib/env_dpdk/sigbus_handler.o 00:03:35.813 CC lib/env_dpdk/pci_dpdk.o 00:03:35.813 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.813 LIB libspdk_json.a 00:03:36.071 SO libspdk_json.so.6.0 00:03:36.071 CC lib/idxd/idxd_user.o 00:03:36.071 CC lib/idxd/idxd_kernel.o 00:03:36.071 LIB libspdk_rdma_provider.a 00:03:36.071 SYMLINK libspdk_json.so 00:03:36.071 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:36.071 SO libspdk_rdma_provider.so.7.0 00:03:36.071 SYMLINK libspdk_rdma_provider.so 00:03:36.329 CC lib/jsonrpc/jsonrpc_server.o 00:03:36.329 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:36.329 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:36.329 CC lib/jsonrpc/jsonrpc_client.o 00:03:36.329 LIB libspdk_idxd.a 00:03:36.329 SO libspdk_idxd.so.12.1 00:03:36.587 LIB libspdk_vmd.a 00:03:36.587 SYMLINK libspdk_idxd.so 00:03:36.587 SO libspdk_vmd.so.6.0 00:03:36.587 SYMLINK libspdk_vmd.so 00:03:36.587 LIB libspdk_jsonrpc.a 00:03:36.587 SO libspdk_jsonrpc.so.6.0 00:03:36.845 SYMLINK libspdk_jsonrpc.so 00:03:37.103 CC lib/rpc/rpc.o 00:03:37.103 LIB libspdk_env_dpdk.a 00:03:37.103 SO libspdk_env_dpdk.so.15.1 00:03:37.385 LIB libspdk_rpc.a 00:03:37.385 SO libspdk_rpc.so.6.0 00:03:37.385 SYMLINK libspdk_rpc.so 00:03:37.385 SYMLINK libspdk_env_dpdk.so 00:03:37.668 CC lib/keyring/keyring.o 
00:03:37.668 CC lib/keyring/keyring_rpc.o 00:03:37.668 CC lib/notify/notify_rpc.o 00:03:37.668 CC lib/notify/notify.o 00:03:37.668 CC lib/trace/trace_flags.o 00:03:37.668 CC lib/trace/trace.o 00:03:37.668 CC lib/trace/trace_rpc.o 00:03:37.668 LIB libspdk_notify.a 00:03:37.927 SO libspdk_notify.so.6.0 00:03:37.927 SYMLINK libspdk_notify.so 00:03:37.927 LIB libspdk_keyring.a 00:03:37.927 SO libspdk_keyring.so.2.0 00:03:37.927 LIB libspdk_trace.a 00:03:37.927 SO libspdk_trace.so.11.0 00:03:37.927 SYMLINK libspdk_keyring.so 00:03:37.927 SYMLINK libspdk_trace.so 00:03:38.185 CC lib/sock/sock.o 00:03:38.185 CC lib/sock/sock_rpc.o 00:03:38.185 CC lib/thread/thread.o 00:03:38.185 CC lib/thread/iobuf.o 00:03:38.755 LIB libspdk_sock.a 00:03:39.013 SO libspdk_sock.so.10.0 00:03:39.013 SYMLINK libspdk_sock.so 00:03:39.270 CC lib/nvme/nvme_fabric.o 00:03:39.270 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:39.270 CC lib/nvme/nvme_ctrlr.o 00:03:39.270 CC lib/nvme/nvme_ns.o 00:03:39.270 CC lib/nvme/nvme_ns_cmd.o 00:03:39.270 CC lib/nvme/nvme_pcie.o 00:03:39.270 CC lib/nvme/nvme_pcie_common.o 00:03:39.270 CC lib/nvme/nvme_qpair.o 00:03:39.270 CC lib/nvme/nvme.o 00:03:40.205 CC lib/nvme/nvme_quirks.o 00:03:40.205 CC lib/nvme/nvme_transport.o 00:03:40.205 CC lib/nvme/nvme_discovery.o 00:03:40.205 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:40.205 LIB libspdk_thread.a 00:03:40.463 SO libspdk_thread.so.11.0 00:03:40.463 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.463 SYMLINK libspdk_thread.so 00:03:40.464 CC lib/nvme/nvme_tcp.o 00:03:40.721 CC lib/nvme/nvme_opal.o 00:03:40.721 CC lib/accel/accel.o 00:03:40.980 CC lib/nvme/nvme_io_msg.o 00:03:40.980 CC lib/blob/blobstore.o 00:03:40.980 CC lib/nvme/nvme_poll_group.o 00:03:40.980 CC lib/nvme/nvme_zns.o 00:03:41.239 CC lib/init/json_config.o 00:03:41.498 CC lib/virtio/virtio.o 00:03:41.498 CC lib/fsdev/fsdev.o 00:03:41.756 CC lib/init/subsystem.o 00:03:41.756 CC lib/init/subsystem_rpc.o 00:03:41.756 CC lib/accel/accel_rpc.o 00:03:41.756 CC 
lib/fsdev/fsdev_io.o 00:03:41.756 CC lib/fsdev/fsdev_rpc.o 00:03:41.756 CC lib/virtio/virtio_vhost_user.o 00:03:41.756 CC lib/init/rpc.o 00:03:42.015 CC lib/blob/request.o 00:03:42.015 CC lib/blob/zeroes.o 00:03:42.015 LIB libspdk_init.a 00:03:42.015 SO libspdk_init.so.6.0 00:03:42.273 SYMLINK libspdk_init.so 00:03:42.273 CC lib/blob/blob_bs_dev.o 00:03:42.273 CC lib/accel/accel_sw.o 00:03:42.273 CC lib/virtio/virtio_vfio_user.o 00:03:42.273 CC lib/virtio/virtio_pci.o 00:03:42.273 CC lib/nvme/nvme_stubs.o 00:03:42.531 CC lib/event/app.o 00:03:42.531 CC lib/event/reactor.o 00:03:42.531 LIB libspdk_fsdev.a 00:03:42.531 CC lib/event/log_rpc.o 00:03:42.531 SO libspdk_fsdev.so.2.0 00:03:42.531 CC lib/event/app_rpc.o 00:03:42.531 CC lib/event/scheduler_static.o 00:03:42.531 SYMLINK libspdk_fsdev.so 00:03:42.531 CC lib/nvme/nvme_auth.o 00:03:42.531 LIB libspdk_accel.a 00:03:42.790 LIB libspdk_virtio.a 00:03:42.790 SO libspdk_accel.so.16.0 00:03:42.790 SO libspdk_virtio.so.7.0 00:03:42.790 SYMLINK libspdk_accel.so 00:03:42.790 CC lib/nvme/nvme_cuse.o 00:03:42.790 CC lib/nvme/nvme_rdma.o 00:03:42.790 SYMLINK libspdk_virtio.so 00:03:42.790 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:43.048 CC lib/bdev/bdev.o 00:03:43.048 CC lib/bdev/bdev_rpc.o 00:03:43.048 CC lib/bdev/bdev_zone.o 00:03:43.048 CC lib/bdev/part.o 00:03:43.048 LIB libspdk_event.a 00:03:43.048 SO libspdk_event.so.14.0 00:03:43.305 SYMLINK libspdk_event.so 00:03:43.305 CC lib/bdev/scsi_nvme.o 00:03:43.872 LIB libspdk_fuse_dispatcher.a 00:03:43.872 SO libspdk_fuse_dispatcher.so.1.0 00:03:43.872 SYMLINK libspdk_fuse_dispatcher.so 00:03:44.439 LIB libspdk_nvme.a 00:03:44.698 SO libspdk_nvme.so.15.0 00:03:45.264 SYMLINK libspdk_nvme.so 00:03:45.524 LIB libspdk_blob.a 00:03:45.524 SO libspdk_blob.so.11.0 00:03:45.524 SYMLINK libspdk_blob.so 00:03:45.783 CC lib/blobfs/blobfs.o 00:03:45.783 CC lib/blobfs/tree.o 00:03:45.783 CC lib/lvol/lvol.o 00:03:46.717 LIB libspdk_bdev.a 00:03:46.975 SO libspdk_bdev.so.17.0 
00:03:46.975 LIB libspdk_blobfs.a 00:03:46.975 SYMLINK libspdk_bdev.so 00:03:46.975 SO libspdk_blobfs.so.10.0 00:03:47.234 LIB libspdk_lvol.a 00:03:47.234 SYMLINK libspdk_blobfs.so 00:03:47.234 SO libspdk_lvol.so.10.0 00:03:47.234 CC lib/nvmf/ctrlr.o 00:03:47.234 CC lib/nvmf/ctrlr_discovery.o 00:03:47.234 CC lib/nvmf/subsystem.o 00:03:47.234 CC lib/nbd/nbd.o 00:03:47.234 CC lib/nvmf/ctrlr_bdev.o 00:03:47.234 CC lib/nbd/nbd_rpc.o 00:03:47.234 CC lib/ftl/ftl_core.o 00:03:47.234 CC lib/ublk/ublk.o 00:03:47.234 CC lib/scsi/dev.o 00:03:47.234 SYMLINK libspdk_lvol.so 00:03:47.234 CC lib/ublk/ublk_rpc.o 00:03:47.493 CC lib/scsi/lun.o 00:03:47.493 CC lib/ftl/ftl_init.o 00:03:47.493 CC lib/scsi/port.o 00:03:47.751 CC lib/scsi/scsi.o 00:03:47.751 CC lib/nvmf/nvmf.o 00:03:47.751 CC lib/ftl/ftl_layout.o 00:03:47.751 LIB libspdk_nbd.a 00:03:47.751 SO libspdk_nbd.so.7.0 00:03:47.751 CC lib/ftl/ftl_debug.o 00:03:47.751 CC lib/scsi/scsi_bdev.o 00:03:48.013 CC lib/nvmf/nvmf_rpc.o 00:03:48.013 SYMLINK libspdk_nbd.so 00:03:48.013 CC lib/scsi/scsi_pr.o 00:03:48.317 CC lib/nvmf/transport.o 00:03:48.317 LIB libspdk_ublk.a 00:03:48.317 SO libspdk_ublk.so.3.0 00:03:48.317 CC lib/ftl/ftl_io.o 00:03:48.318 SYMLINK libspdk_ublk.so 00:03:48.318 CC lib/nvmf/tcp.o 00:03:48.318 CC lib/nvmf/stubs.o 00:03:48.318 CC lib/nvmf/mdns_server.o 00:03:48.575 CC lib/scsi/scsi_rpc.o 00:03:48.575 CC lib/ftl/ftl_sb.o 00:03:48.575 CC lib/scsi/task.o 00:03:48.833 CC lib/ftl/ftl_l2p.o 00:03:48.833 CC lib/nvmf/rdma.o 00:03:48.833 LIB libspdk_scsi.a 00:03:48.833 CC lib/nvmf/auth.o 00:03:49.092 CC lib/ftl/ftl_l2p_flat.o 00:03:49.092 SO libspdk_scsi.so.9.0 00:03:49.092 CC lib/ftl/ftl_nv_cache.o 00:03:49.092 CC lib/ftl/ftl_band.o 00:03:49.092 CC lib/ftl/ftl_band_ops.o 00:03:49.092 CC lib/ftl/ftl_writer.o 00:03:49.092 SYMLINK libspdk_scsi.so 00:03:49.092 CC lib/ftl/ftl_rq.o 00:03:49.350 CC lib/ftl/ftl_reloc.o 00:03:49.350 CC lib/ftl/ftl_l2p_cache.o 00:03:49.609 CC lib/ftl/ftl_p2l.o 00:03:49.609 CC 
lib/ftl/ftl_p2l_log.o 00:03:49.609 CC lib/ftl/mngt/ftl_mngt.o 00:03:49.609 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:49.867 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:49.867 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:49.867 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.126 CC lib/iscsi/conn.o 00:03:50.126 CC lib/iscsi/init_grp.o 00:03:50.126 CC lib/vhost/vhost.o 00:03:50.126 CC lib/vhost/vhost_rpc.o 00:03:50.126 CC lib/vhost/vhost_scsi.o 00:03:50.385 CC lib/vhost/vhost_blk.o 00:03:50.385 CC lib/iscsi/iscsi.o 00:03:50.385 CC lib/vhost/rte_vhost_user.o 00:03:50.385 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.642 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.899 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.899 CC lib/iscsi/param.o 00:03:50.899 CC lib/iscsi/portal_grp.o 00:03:50.899 CC lib/iscsi/tgt_node.o 00:03:50.899 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.156 CC lib/iscsi/iscsi_subsystem.o 00:03:51.156 CC lib/iscsi/iscsi_rpc.o 00:03:51.156 CC lib/iscsi/task.o 00:03:51.156 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.450 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.450 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.450 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.450 CC lib/ftl/utils/ftl_conf.o 00:03:51.450 CC lib/ftl/utils/ftl_md.o 00:03:51.733 CC lib/ftl/utils/ftl_mempool.o 00:03:51.733 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.733 CC lib/ftl/utils/ftl_property.o 00:03:51.733 LIB libspdk_vhost.a 00:03:51.733 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.733 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.733 SO libspdk_vhost.so.8.0 00:03:51.733 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.991 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.991 LIB libspdk_nvmf.a 00:03:51.991 SYMLINK libspdk_vhost.so 00:03:51.991 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.991 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.991 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.991 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.991 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.991 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.991 SO 
libspdk_nvmf.so.20.0 00:03:51.991 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:52.250 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:52.250 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:52.250 CC lib/ftl/base/ftl_base_dev.o 00:03:52.250 CC lib/ftl/base/ftl_base_bdev.o 00:03:52.250 CC lib/ftl/ftl_trace.o 00:03:52.250 LIB libspdk_iscsi.a 00:03:52.250 SYMLINK libspdk_nvmf.so 00:03:52.509 SO libspdk_iscsi.so.8.0 00:03:52.509 SYMLINK libspdk_iscsi.so 00:03:52.509 LIB libspdk_ftl.a 00:03:53.076 SO libspdk_ftl.so.9.0 00:03:53.335 SYMLINK libspdk_ftl.so 00:03:53.593 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.851 CC module/accel/error/accel_error.o 00:03:53.851 CC module/accel/iaa/accel_iaa.o 00:03:53.851 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.851 CC module/sock/posix/posix.o 00:03:53.851 CC module/accel/dsa/accel_dsa.o 00:03:53.851 CC module/accel/ioat/accel_ioat.o 00:03:53.851 CC module/fsdev/aio/fsdev_aio.o 00:03:53.851 CC module/keyring/file/keyring.o 00:03:53.851 CC module/blob/bdev/blob_bdev.o 00:03:53.851 LIB libspdk_env_dpdk_rpc.a 00:03:53.851 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.851 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.109 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:54.109 CC module/keyring/file/keyring_rpc.o 00:03:54.109 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.109 LIB libspdk_scheduler_dynamic.a 00:03:54.109 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.109 SO libspdk_scheduler_dynamic.so.4.0 00:03:54.109 CC module/accel/error/accel_error_rpc.o 00:03:54.109 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.109 LIB libspdk_blob_bdev.a 00:03:54.109 LIB libspdk_accel_ioat.a 00:03:54.109 LIB libspdk_keyring_file.a 00:03:54.109 SO libspdk_blob_bdev.so.11.0 00:03:54.109 LIB libspdk_accel_iaa.a 00:03:54.109 SO libspdk_accel_ioat.so.6.0 00:03:54.109 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.109 SO libspdk_keyring_file.so.2.0 00:03:54.367 SO libspdk_accel_iaa.so.3.0 00:03:54.367 LIB libspdk_accel_error.a 00:03:54.367 SYMLINK libspdk_keyring_file.so 
00:03:54.367 SYMLINK libspdk_accel_ioat.so 00:03:54.367 SYMLINK libspdk_blob_bdev.so 00:03:54.367 CC module/fsdev/aio/linux_aio_mgr.o 00:03:54.367 SO libspdk_accel_error.so.2.0 00:03:54.367 SYMLINK libspdk_accel_iaa.so 00:03:54.367 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.367 LIB libspdk_accel_dsa.a 00:03:54.367 SYMLINK libspdk_accel_error.so 00:03:54.367 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.367 SO libspdk_accel_dsa.so.5.0 00:03:54.624 CC module/keyring/linux/keyring.o 00:03:54.624 SYMLINK libspdk_accel_dsa.so 00:03:54.624 CC module/keyring/linux/keyring_rpc.o 00:03:54.624 LIB libspdk_scheduler_dpdk_governor.a 00:03:54.624 LIB libspdk_scheduler_gscheduler.a 00:03:54.624 CC module/bdev/delay/vbdev_delay.o 00:03:54.624 CC module/bdev/error/vbdev_error.o 00:03:54.624 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:54.624 SO libspdk_scheduler_gscheduler.so.4.0 00:03:54.624 CC module/blobfs/bdev/blobfs_bdev.o 00:03:54.624 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.624 LIB libspdk_keyring_linux.a 00:03:54.625 CC module/bdev/error/vbdev_error_rpc.o 00:03:54.625 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.625 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:54.625 SO libspdk_keyring_linux.so.1.0 00:03:54.625 CC module/bdev/gpt/gpt.o 00:03:54.954 LIB libspdk_fsdev_aio.a 00:03:54.954 SYMLINK libspdk_keyring_linux.so 00:03:54.954 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.954 SO libspdk_fsdev_aio.so.1.0 00:03:54.954 LIB libspdk_sock_posix.a 00:03:54.954 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.954 LIB libspdk_blobfs_bdev.a 00:03:54.954 SO libspdk_sock_posix.so.6.0 00:03:54.954 SO libspdk_blobfs_bdev.so.6.0 00:03:54.954 SYMLINK libspdk_fsdev_aio.so 00:03:54.954 LIB libspdk_bdev_error.a 00:03:54.954 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:54.954 SO libspdk_bdev_error.so.6.0 00:03:54.954 SYMLINK libspdk_sock_posix.so 00:03:54.954 SYMLINK libspdk_blobfs_bdev.so 00:03:54.954 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.954 CC 
module/bdev/malloc/bdev_malloc.o 00:03:55.214 SYMLINK libspdk_bdev_error.so 00:03:55.214 CC module/bdev/null/bdev_null.o 00:03:55.214 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.214 LIB libspdk_bdev_gpt.a 00:03:55.214 CC module/bdev/nvme/bdev_nvme.o 00:03:55.214 SO libspdk_bdev_gpt.so.6.0 00:03:55.214 CC module/bdev/passthru/vbdev_passthru.o 00:03:55.214 LIB libspdk_bdev_delay.a 00:03:55.214 CC module/bdev/raid/bdev_raid.o 00:03:55.214 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:55.214 SO libspdk_bdev_delay.so.6.0 00:03:55.214 SYMLINK libspdk_bdev_gpt.so 00:03:55.214 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.473 SYMLINK libspdk_bdev_delay.so 00:03:55.473 CC module/bdev/nvme/nvme_rpc.o 00:03:55.473 CC module/bdev/nvme/bdev_mdns_client.o 00:03:55.473 CC module/bdev/null/bdev_null_rpc.o 00:03:55.473 LIB libspdk_bdev_lvol.a 00:03:55.473 LIB libspdk_bdev_malloc.a 00:03:55.473 SO libspdk_bdev_lvol.so.6.0 00:03:55.473 SO libspdk_bdev_malloc.so.6.0 00:03:55.733 CC module/bdev/nvme/vbdev_opal.o 00:03:55.733 LIB libspdk_bdev_passthru.a 00:03:55.733 LIB libspdk_bdev_null.a 00:03:55.733 SYMLINK libspdk_bdev_lvol.so 00:03:55.733 SYMLINK libspdk_bdev_malloc.so 00:03:55.733 SO libspdk_bdev_null.so.6.0 00:03:55.733 SO libspdk_bdev_passthru.so.6.0 00:03:55.733 CC module/bdev/split/vbdev_split.o 00:03:55.733 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:55.733 SYMLINK libspdk_bdev_passthru.so 00:03:55.733 SYMLINK libspdk_bdev_null.so 00:03:55.733 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:55.991 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:55.991 CC module/bdev/aio/bdev_aio.o 00:03:55.991 CC module/bdev/ftl/bdev_ftl.o 00:03:55.991 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.991 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.249 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.249 LIB libspdk_bdev_split.a 00:03:56.249 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.249 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.249 SO libspdk_bdev_split.so.6.0 
00:03:56.249 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.249 CC module/bdev/aio/bdev_aio_rpc.o 00:03:56.249 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.249 LIB libspdk_bdev_ftl.a 00:03:56.508 SYMLINK libspdk_bdev_split.so 00:03:56.508 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.508 SO libspdk_bdev_ftl.so.6.0 00:03:56.508 LIB libspdk_bdev_zone_block.a 00:03:56.508 SO libspdk_bdev_zone_block.so.6.0 00:03:56.508 SYMLINK libspdk_bdev_ftl.so 00:03:56.508 LIB libspdk_bdev_aio.a 00:03:56.508 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.508 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.508 SO libspdk_bdev_aio.so.6.0 00:03:56.508 SYMLINK libspdk_bdev_zone_block.so 00:03:56.508 CC module/bdev/raid/raid0.o 00:03:56.508 SYMLINK libspdk_bdev_aio.so 00:03:56.508 CC module/bdev/raid/raid1.o 00:03:56.508 CC module/bdev/raid/concat.o 00:03:56.766 CC module/bdev/raid/raid5f.o 00:03:56.766 LIB libspdk_bdev_iscsi.a 00:03:56.766 SO libspdk_bdev_iscsi.so.6.0 00:03:56.766 SYMLINK libspdk_bdev_iscsi.so 00:03:57.024 LIB libspdk_bdev_virtio.a 00:03:57.024 SO libspdk_bdev_virtio.so.6.0 00:03:57.024 SYMLINK libspdk_bdev_virtio.so 00:03:57.283 LIB libspdk_bdev_raid.a 00:03:57.541 SO libspdk_bdev_raid.so.6.0 00:03:57.541 SYMLINK libspdk_bdev_raid.so 00:03:58.916 LIB libspdk_bdev_nvme.a 00:03:58.916 SO libspdk_bdev_nvme.so.7.1 00:03:59.173 SYMLINK libspdk_bdev_nvme.so 00:03:59.738 CC module/event/subsystems/sock/sock.o 00:03:59.738 CC module/event/subsystems/iobuf/iobuf.o 00:03:59.738 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.738 CC module/event/subsystems/vmd/vmd.o 00:03:59.738 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:59.738 CC module/event/subsystems/keyring/keyring.o 00:03:59.738 CC module/event/subsystems/scheduler/scheduler.o 00:03:59.738 CC module/event/subsystems/fsdev/fsdev.o 00:03:59.738 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:59.996 LIB libspdk_event_scheduler.a 00:03:59.996 SO libspdk_event_scheduler.so.4.0 00:03:59.996 LIB 
libspdk_event_fsdev.a 00:03:59.996 LIB libspdk_event_keyring.a 00:03:59.996 LIB libspdk_event_sock.a 00:03:59.996 LIB libspdk_event_vhost_blk.a 00:03:59.996 SO libspdk_event_fsdev.so.1.0 00:03:59.996 LIB libspdk_event_vmd.a 00:03:59.996 SO libspdk_event_keyring.so.1.0 00:03:59.996 SO libspdk_event_sock.so.5.0 00:03:59.996 SO libspdk_event_vhost_blk.so.3.0 00:03:59.996 SYMLINK libspdk_event_scheduler.so 00:03:59.996 LIB libspdk_event_iobuf.a 00:03:59.996 SYMLINK libspdk_event_fsdev.so 00:03:59.996 SO libspdk_event_vmd.so.6.0 00:03:59.996 SO libspdk_event_iobuf.so.3.0 00:03:59.996 SYMLINK libspdk_event_keyring.so 00:03:59.996 SYMLINK libspdk_event_vhost_blk.so 00:03:59.996 SYMLINK libspdk_event_sock.so 00:03:59.996 SYMLINK libspdk_event_vmd.so 00:03:59.996 SYMLINK libspdk_event_iobuf.so 00:04:00.566 CC module/event/subsystems/accel/accel.o 00:04:00.566 LIB libspdk_event_accel.a 00:04:00.566 SO libspdk_event_accel.so.6.0 00:04:00.566 SYMLINK libspdk_event_accel.so 00:04:01.132 CC module/event/subsystems/bdev/bdev.o 00:04:01.132 LIB libspdk_event_bdev.a 00:04:01.132 SO libspdk_event_bdev.so.6.0 00:04:01.389 SYMLINK libspdk_event_bdev.so 00:04:01.648 CC module/event/subsystems/nbd/nbd.o 00:04:01.649 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.649 CC module/event/subsystems/ublk/ublk.o 00:04:01.649 CC module/event/subsystems/scsi/scsi.o 00:04:01.649 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.649 LIB libspdk_event_nbd.a 00:04:01.649 LIB libspdk_event_ublk.a 00:04:01.649 LIB libspdk_event_scsi.a 00:04:01.649 SO libspdk_event_nbd.so.6.0 00:04:01.649 SO libspdk_event_ublk.so.3.0 00:04:01.649 SO libspdk_event_scsi.so.6.0 00:04:01.907 SYMLINK libspdk_event_scsi.so 00:04:01.907 SYMLINK libspdk_event_nbd.so 00:04:01.907 SYMLINK libspdk_event_ublk.so 00:04:01.907 LIB libspdk_event_nvmf.a 00:04:01.907 SO libspdk_event_nvmf.so.6.0 00:04:01.907 SYMLINK libspdk_event_nvmf.so 00:04:01.907 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.907 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:02.166 LIB libspdk_event_vhost_scsi.a 00:04:02.166 SO libspdk_event_vhost_scsi.so.3.0 00:04:02.166 LIB libspdk_event_iscsi.a 00:04:02.166 SO libspdk_event_iscsi.so.6.0 00:04:02.166 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.423 SYMLINK libspdk_event_iscsi.so 00:04:02.423 SO libspdk.so.6.0 00:04:02.423 SYMLINK libspdk.so 00:04:02.682 CC test/rpc_client/rpc_client_test.o 00:04:02.682 TEST_HEADER include/spdk/accel.h 00:04:02.682 TEST_HEADER include/spdk/accel_module.h 00:04:02.682 CC app/trace_record/trace_record.o 00:04:02.682 TEST_HEADER include/spdk/assert.h 00:04:02.682 TEST_HEADER include/spdk/barrier.h 00:04:02.682 TEST_HEADER include/spdk/base64.h 00:04:02.682 TEST_HEADER include/spdk/bdev.h 00:04:02.682 CXX app/trace/trace.o 00:04:02.682 TEST_HEADER include/spdk/bdev_module.h 00:04:02.682 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.682 TEST_HEADER include/spdk/bit_array.h 00:04:02.682 TEST_HEADER include/spdk/bit_pool.h 00:04:02.682 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.682 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.682 TEST_HEADER include/spdk/blobfs.h 00:04:02.682 TEST_HEADER include/spdk/blob.h 00:04:02.682 TEST_HEADER include/spdk/conf.h 00:04:02.682 TEST_HEADER include/spdk/config.h 00:04:02.682 TEST_HEADER include/spdk/cpuset.h 00:04:02.682 TEST_HEADER include/spdk/crc16.h 00:04:02.682 TEST_HEADER include/spdk/crc32.h 00:04:02.682 TEST_HEADER include/spdk/crc64.h 00:04:02.682 TEST_HEADER include/spdk/dif.h 00:04:02.682 TEST_HEADER include/spdk/dma.h 00:04:02.682 TEST_HEADER include/spdk/endian.h 00:04:02.682 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.682 TEST_HEADER include/spdk/env.h 00:04:02.682 TEST_HEADER include/spdk/event.h 00:04:02.682 TEST_HEADER include/spdk/fd_group.h 00:04:02.940 TEST_HEADER include/spdk/fd.h 00:04:02.940 TEST_HEADER include/spdk/file.h 00:04:02.940 TEST_HEADER include/spdk/fsdev.h 00:04:02.940 TEST_HEADER include/spdk/fsdev_module.h 00:04:02.940 
TEST_HEADER include/spdk/ftl.h 00:04:02.940 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:02.940 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.940 TEST_HEADER include/spdk/hexlify.h 00:04:02.940 TEST_HEADER include/spdk/histogram_data.h 00:04:02.940 TEST_HEADER include/spdk/idxd.h 00:04:02.940 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.940 TEST_HEADER include/spdk/init.h 00:04:02.940 TEST_HEADER include/spdk/ioat.h 00:04:02.940 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.940 CC examples/ioat/perf/perf.o 00:04:02.940 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.940 TEST_HEADER include/spdk/json.h 00:04:02.940 CC examples/util/zipf/zipf.o 00:04:02.940 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.940 TEST_HEADER include/spdk/keyring.h 00:04:02.940 TEST_HEADER include/spdk/keyring_module.h 00:04:02.940 TEST_HEADER include/spdk/likely.h 00:04:02.940 TEST_HEADER include/spdk/log.h 00:04:02.940 TEST_HEADER include/spdk/lvol.h 00:04:02.940 CC test/thread/poller_perf/poller_perf.o 00:04:02.940 TEST_HEADER include/spdk/md5.h 00:04:02.940 TEST_HEADER include/spdk/memory.h 00:04:02.940 TEST_HEADER include/spdk/mmio.h 00:04:02.940 TEST_HEADER include/spdk/nbd.h 00:04:02.940 TEST_HEADER include/spdk/net.h 00:04:02.940 TEST_HEADER include/spdk/notify.h 00:04:02.940 TEST_HEADER include/spdk/nvme.h 00:04:02.940 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.940 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.940 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.940 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.940 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.940 CC test/dma/test_dma/test_dma.o 00:04:02.940 CC test/app/bdev_svc/bdev_svc.o 00:04:02.940 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.940 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.940 TEST_HEADER include/spdk/nvmf.h 00:04:02.940 CC test/env/mem_callbacks/mem_callbacks.o 00:04:02.940 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.940 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.940 TEST_HEADER 
include/spdk/opal.h 00:04:02.940 TEST_HEADER include/spdk/opal_spec.h 00:04:02.940 TEST_HEADER include/spdk/pci_ids.h 00:04:02.940 TEST_HEADER include/spdk/pipe.h 00:04:02.940 TEST_HEADER include/spdk/queue.h 00:04:02.940 TEST_HEADER include/spdk/reduce.h 00:04:02.940 TEST_HEADER include/spdk/rpc.h 00:04:02.940 TEST_HEADER include/spdk/scheduler.h 00:04:02.940 TEST_HEADER include/spdk/scsi.h 00:04:02.940 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.940 TEST_HEADER include/spdk/sock.h 00:04:02.940 TEST_HEADER include/spdk/stdinc.h 00:04:02.940 TEST_HEADER include/spdk/string.h 00:04:02.940 TEST_HEADER include/spdk/thread.h 00:04:02.940 TEST_HEADER include/spdk/trace.h 00:04:02.940 TEST_HEADER include/spdk/trace_parser.h 00:04:02.940 TEST_HEADER include/spdk/tree.h 00:04:02.940 TEST_HEADER include/spdk/ublk.h 00:04:02.940 TEST_HEADER include/spdk/util.h 00:04:02.940 TEST_HEADER include/spdk/uuid.h 00:04:02.940 LINK rpc_client_test 00:04:02.940 TEST_HEADER include/spdk/version.h 00:04:02.940 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.940 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.940 TEST_HEADER include/spdk/vhost.h 00:04:02.940 TEST_HEADER include/spdk/vmd.h 00:04:02.940 TEST_HEADER include/spdk/xor.h 00:04:02.940 TEST_HEADER include/spdk/zipf.h 00:04:02.940 LINK zipf 00:04:02.940 CXX test/cpp_headers/accel.o 00:04:03.199 LINK spdk_trace_record 00:04:03.199 LINK poller_perf 00:04:03.199 LINK ioat_perf 00:04:03.199 CXX test/cpp_headers/accel_module.o 00:04:03.199 LINK bdev_svc 00:04:03.199 CXX test/cpp_headers/assert.o 00:04:03.199 LINK spdk_trace 00:04:03.457 CXX test/cpp_headers/barrier.o 00:04:03.457 CC app/nvmf_tgt/nvmf_main.o 00:04:03.457 CC test/env/vtophys/vtophys.o 00:04:03.457 CC examples/ioat/verify/verify.o 00:04:03.457 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.457 CC test/app/histogram_perf/histogram_perf.o 00:04:03.715 CXX test/cpp_headers/base64.o 00:04:03.715 CC test/event/event_perf/event_perf.o 00:04:03.715 LINK 
vtophys 00:04:03.715 LINK test_dma 00:04:03.715 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.715 LINK nvmf_tgt 00:04:03.715 LINK mem_callbacks 00:04:03.715 LINK env_dpdk_post_init 00:04:03.715 LINK verify 00:04:03.715 LINK histogram_perf 00:04:03.715 CXX test/cpp_headers/bdev.o 00:04:03.715 LINK event_perf 00:04:03.715 CXX test/cpp_headers/bdev_module.o 00:04:03.973 CXX test/cpp_headers/bdev_zone.o 00:04:03.973 CC test/env/memory/memory_ut.o 00:04:03.973 CC test/env/pci/pci_ut.o 00:04:03.973 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:03.973 CC app/iscsi_tgt/iscsi_tgt.o 00:04:03.973 CC test/event/reactor/reactor.o 00:04:03.973 CXX test/cpp_headers/bit_array.o 00:04:04.230 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:04.231 LINK nvme_fuzz 00:04:04.231 CC test/accel/dif/dif.o 00:04:04.231 LINK interrupt_tgt 00:04:04.231 LINK reactor 00:04:04.231 LINK iscsi_tgt 00:04:04.231 CC test/blobfs/mkfs/mkfs.o 00:04:04.231 CXX test/cpp_headers/bit_pool.o 00:04:04.231 CXX test/cpp_headers/blob_bdev.o 00:04:04.489 CC test/event/reactor_perf/reactor_perf.o 00:04:04.489 LINK mkfs 00:04:04.489 LINK pci_ut 00:04:04.489 CXX test/cpp_headers/blobfs_bdev.o 00:04:04.489 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:04.489 CC examples/thread/thread/thread_ex.o 00:04:04.746 CC app/spdk_tgt/spdk_tgt.o 00:04:04.746 LINK reactor_perf 00:04:04.747 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:04.747 CXX test/cpp_headers/blobfs.o 00:04:05.004 LINK spdk_tgt 00:04:05.004 LINK thread 00:04:05.004 CXX test/cpp_headers/blob.o 00:04:05.004 CC test/event/app_repeat/app_repeat.o 00:04:05.004 CC test/nvme/aer/aer.o 00:04:05.004 CC test/lvol/esnap/esnap.o 00:04:05.004 LINK dif 00:04:05.004 LINK app_repeat 00:04:05.004 CXX test/cpp_headers/conf.o 00:04:05.263 CC app/spdk_lspci/spdk_lspci.o 00:04:05.263 LINK vhost_fuzz 00:04:05.263 LINK memory_ut 00:04:05.263 CXX test/cpp_headers/config.o 00:04:05.263 CC examples/sock/hello_world/hello_sock.o 00:04:05.263 LINK aer 00:04:05.263 CXX 
test/cpp_headers/cpuset.o 00:04:05.522 LINK spdk_lspci 00:04:05.522 CC test/event/scheduler/scheduler.o 00:04:05.522 CC test/nvme/reset/reset.o 00:04:05.522 CC test/nvme/sgl/sgl.o 00:04:05.522 CXX test/cpp_headers/crc16.o 00:04:05.522 CC test/nvme/e2edp/nvme_dp.o 00:04:05.522 CC test/nvme/overhead/overhead.o 00:04:05.783 LINK hello_sock 00:04:05.783 CC app/spdk_nvme_perf/perf.o 00:04:05.783 LINK scheduler 00:04:05.783 CXX test/cpp_headers/crc32.o 00:04:05.783 LINK reset 00:04:06.053 LINK sgl 00:04:06.053 LINK nvme_dp 00:04:06.053 CXX test/cpp_headers/crc64.o 00:04:06.053 LINK overhead 00:04:06.053 CC examples/vmd/lsvmd/lsvmd.o 00:04:06.053 CC examples/vmd/led/led.o 00:04:06.053 CC app/spdk_nvme_identify/identify.o 00:04:06.311 CC test/nvme/err_injection/err_injection.o 00:04:06.311 CXX test/cpp_headers/dif.o 00:04:06.311 LINK lsvmd 00:04:06.311 CC test/nvme/startup/startup.o 00:04:06.311 LINK led 00:04:06.311 CC test/nvme/reserve/reserve.o 00:04:06.312 LINK err_injection 00:04:06.312 LINK iscsi_fuzz 00:04:06.312 CXX test/cpp_headers/dma.o 00:04:06.569 LINK startup 00:04:06.569 CXX test/cpp_headers/endian.o 00:04:06.569 CC test/bdev/bdevio/bdevio.o 00:04:06.569 LINK reserve 00:04:06.569 CXX test/cpp_headers/env_dpdk.o 00:04:06.569 CC examples/idxd/perf/perf.o 00:04:06.569 CXX test/cpp_headers/env.o 00:04:06.827 CC test/app/jsoncat/jsoncat.o 00:04:06.827 LINK spdk_nvme_perf 00:04:06.827 CXX test/cpp_headers/event.o 00:04:06.827 CC test/nvme/simple_copy/simple_copy.o 00:04:07.085 CC test/nvme/boot_partition/boot_partition.o 00:04:07.085 LINK jsoncat 00:04:07.085 CC test/nvme/connect_stress/connect_stress.o 00:04:07.085 LINK idxd_perf 00:04:07.085 CXX test/cpp_headers/fd_group.o 00:04:07.085 LINK bdevio 00:04:07.085 CC app/spdk_nvme_discover/discovery_aer.o 00:04:07.085 LINK boot_partition 00:04:07.085 LINK connect_stress 00:04:07.343 CC test/app/stub/stub.o 00:04:07.343 CXX test/cpp_headers/fd.o 00:04:07.343 LINK simple_copy 00:04:07.343 CXX test/cpp_headers/file.o 
00:04:07.343 LINK spdk_nvme_identify 00:04:07.343 CXX test/cpp_headers/fsdev.o 00:04:07.343 CXX test/cpp_headers/fsdev_module.o 00:04:07.343 LINK spdk_nvme_discover 00:04:07.343 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:07.343 LINK stub 00:04:07.601 CXX test/cpp_headers/ftl.o 00:04:07.601 CC test/nvme/compliance/nvme_compliance.o 00:04:07.601 CXX test/cpp_headers/fuse_dispatcher.o 00:04:07.601 CC test/nvme/fused_ordering/fused_ordering.o 00:04:07.601 CC app/spdk_top/spdk_top.o 00:04:07.601 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:07.601 CC app/vhost/vhost.o 00:04:07.858 CXX test/cpp_headers/gpt_spec.o 00:04:07.858 CXX test/cpp_headers/hexlify.o 00:04:07.858 CC examples/accel/perf/accel_perf.o 00:04:07.858 LINK hello_fsdev 00:04:07.858 LINK fused_ordering 00:04:07.858 LINK doorbell_aers 00:04:07.858 LINK nvme_compliance 00:04:07.858 CXX test/cpp_headers/histogram_data.o 00:04:08.115 LINK vhost 00:04:08.115 CXX test/cpp_headers/idxd.o 00:04:08.115 CXX test/cpp_headers/idxd_spec.o 00:04:08.115 CC app/spdk_dd/spdk_dd.o 00:04:08.115 CXX test/cpp_headers/init.o 00:04:08.115 CXX test/cpp_headers/ioat.o 00:04:08.115 CC test/nvme/fdp/fdp.o 00:04:08.372 CXX test/cpp_headers/ioat_spec.o 00:04:08.372 CC examples/blob/hello_world/hello_blob.o 00:04:08.372 CC examples/blob/cli/blobcli.o 00:04:08.372 LINK accel_perf 00:04:08.372 CC app/fio/nvme/fio_plugin.o 00:04:08.372 LINK spdk_dd 00:04:08.630 CC app/fio/bdev/fio_plugin.o 00:04:08.630 CXX test/cpp_headers/iscsi_spec.o 00:04:08.630 CXX test/cpp_headers/json.o 00:04:08.630 LINK hello_blob 00:04:08.630 LINK fdp 00:04:08.630 CXX test/cpp_headers/jsonrpc.o 00:04:08.887 LINK spdk_top 00:04:08.888 CXX test/cpp_headers/keyring.o 00:04:08.888 CC test/nvme/cuse/cuse.o 00:04:08.888 CXX test/cpp_headers/keyring_module.o 00:04:08.888 CXX test/cpp_headers/likely.o 00:04:09.145 CC examples/nvme/reconnect/reconnect.o 00:04:09.145 LINK blobcli 00:04:09.145 CC examples/nvme/hello_world/hello_world.o 00:04:09.145 CXX 
test/cpp_headers/log.o 00:04:09.145 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:09.145 LINK spdk_nvme 00:04:09.145 LINK spdk_bdev 00:04:09.145 CXX test/cpp_headers/lvol.o 00:04:09.402 CC examples/bdev/hello_world/hello_bdev.o 00:04:09.402 LINK hello_world 00:04:09.402 CC examples/bdev/bdevperf/bdevperf.o 00:04:09.402 CC examples/nvme/arbitration/arbitration.o 00:04:09.402 CC examples/nvme/hotplug/hotplug.o 00:04:09.402 CXX test/cpp_headers/md5.o 00:04:09.402 LINK reconnect 00:04:09.660 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:09.660 LINK hello_bdev 00:04:09.660 CXX test/cpp_headers/memory.o 00:04:09.660 LINK hotplug 00:04:09.660 CC examples/nvme/abort/abort.o 00:04:09.660 LINK nvme_manage 00:04:09.918 LINK arbitration 00:04:09.918 LINK cmb_copy 00:04:09.918 CXX test/cpp_headers/mmio.o 00:04:09.918 CXX test/cpp_headers/nbd.o 00:04:09.918 CXX test/cpp_headers/net.o 00:04:09.918 CXX test/cpp_headers/notify.o 00:04:09.918 CXX test/cpp_headers/nvme.o 00:04:09.918 CXX test/cpp_headers/nvme_intel.o 00:04:09.918 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:10.225 CXX test/cpp_headers/nvme_ocssd.o 00:04:10.225 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:10.225 CXX test/cpp_headers/nvme_spec.o 00:04:10.225 CXX test/cpp_headers/nvme_zns.o 00:04:10.225 LINK abort 00:04:10.225 CXX test/cpp_headers/nvmf_cmd.o 00:04:10.225 LINK pmr_persistence 00:04:10.225 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:10.484 CXX test/cpp_headers/nvmf.o 00:04:10.484 CXX test/cpp_headers/nvmf_spec.o 00:04:10.484 CXX test/cpp_headers/nvmf_transport.o 00:04:10.484 LINK bdevperf 00:04:10.484 CXX test/cpp_headers/opal.o 00:04:10.484 CXX test/cpp_headers/opal_spec.o 00:04:10.484 CXX test/cpp_headers/pci_ids.o 00:04:10.484 LINK cuse 00:04:10.484 CXX test/cpp_headers/pipe.o 00:04:10.743 CXX test/cpp_headers/queue.o 00:04:10.743 CXX test/cpp_headers/reduce.o 00:04:10.743 CXX test/cpp_headers/rpc.o 00:04:10.743 CXX test/cpp_headers/scheduler.o 00:04:10.743 CXX test/cpp_headers/scsi.o 
00:04:10.743 CXX test/cpp_headers/scsi_spec.o 00:04:10.743 CXX test/cpp_headers/sock.o 00:04:10.743 CXX test/cpp_headers/stdinc.o 00:04:10.743 CXX test/cpp_headers/string.o 00:04:10.743 CXX test/cpp_headers/thread.o 00:04:10.743 CXX test/cpp_headers/trace.o 00:04:10.743 CXX test/cpp_headers/trace_parser.o 00:04:10.743 CXX test/cpp_headers/tree.o 00:04:11.001 CXX test/cpp_headers/ublk.o 00:04:11.001 CXX test/cpp_headers/util.o 00:04:11.001 CXX test/cpp_headers/uuid.o 00:04:11.001 CC examples/nvmf/nvmf/nvmf.o 00:04:11.001 CXX test/cpp_headers/version.o 00:04:11.001 CXX test/cpp_headers/vfio_user_pci.o 00:04:11.001 CXX test/cpp_headers/vfio_user_spec.o 00:04:11.001 CXX test/cpp_headers/vhost.o 00:04:11.001 CXX test/cpp_headers/vmd.o 00:04:11.001 CXX test/cpp_headers/xor.o 00:04:11.001 CXX test/cpp_headers/zipf.o 00:04:11.259 LINK nvmf 00:04:12.706 LINK esnap 00:04:12.965 00:04:12.965 real 1m41.312s 00:04:12.965 user 9m37.985s 00:04:12.965 sys 1m51.530s 00:04:12.965 ************************************ 00:04:12.965 END TEST make 00:04:12.965 ************************************ 00:04:12.965 14:21:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:12.965 14:21:13 make -- common/autotest_common.sh@10 -- $ set +x 00:04:12.965 14:21:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:12.965 14:21:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:12.965 14:21:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:12.965 14:21:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.965 14:21:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:12.965 14:21:13 -- pm/common@44 -- $ pid=5303 00:04:12.965 14:21:13 -- pm/common@50 -- $ kill -TERM 5303 00:04:12.965 14:21:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.965 14:21:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:12.965 
14:21:13 -- pm/common@44 -- $ pid=5304 00:04:12.965 14:21:13 -- pm/common@50 -- $ kill -TERM 5304 00:04:12.965 14:21:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:12.965 14:21:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:12.965 14:21:13 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:12.965 14:21:13 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:12.965 14:21:13 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:13.225 14:21:14 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:13.226 14:21:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.226 14:21:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.226 14:21:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.226 14:21:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.226 14:21:14 -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.226 14:21:14 -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.226 14:21:14 -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.226 14:21:14 -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.226 14:21:14 -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.226 14:21:14 -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.226 14:21:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.226 14:21:14 -- scripts/common.sh@344 -- # case "$op" in 00:04:13.226 14:21:14 -- scripts/common.sh@345 -- # : 1 00:04:13.226 14:21:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.226 14:21:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.226 14:21:14 -- scripts/common.sh@365 -- # decimal 1 00:04:13.226 14:21:14 -- scripts/common.sh@353 -- # local d=1 00:04:13.226 14:21:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.226 14:21:14 -- scripts/common.sh@355 -- # echo 1 00:04:13.226 14:21:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.226 14:21:14 -- scripts/common.sh@366 -- # decimal 2 00:04:13.226 14:21:14 -- scripts/common.sh@353 -- # local d=2 00:04:13.226 14:21:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.226 14:21:14 -- scripts/common.sh@355 -- # echo 2 00:04:13.226 14:21:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.226 14:21:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.226 14:21:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.226 14:21:14 -- scripts/common.sh@368 -- # return 0 00:04:13.226 14:21:14 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.226 14:21:14 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.226 --rc genhtml_branch_coverage=1 00:04:13.226 --rc genhtml_function_coverage=1 00:04:13.226 --rc genhtml_legend=1 00:04:13.226 --rc geninfo_all_blocks=1 00:04:13.226 --rc geninfo_unexecuted_blocks=1 00:04:13.226 00:04:13.226 ' 00:04:13.226 14:21:14 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.226 --rc genhtml_branch_coverage=1 00:04:13.226 --rc genhtml_function_coverage=1 00:04:13.226 --rc genhtml_legend=1 00:04:13.226 --rc geninfo_all_blocks=1 00:04:13.226 --rc geninfo_unexecuted_blocks=1 00:04:13.226 00:04:13.226 ' 00:04:13.226 14:21:14 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.226 --rc genhtml_branch_coverage=1 00:04:13.226 --rc 
genhtml_function_coverage=1 00:04:13.226 --rc genhtml_legend=1 00:04:13.226 --rc geninfo_all_blocks=1 00:04:13.226 --rc geninfo_unexecuted_blocks=1 00:04:13.226 00:04:13.226 ' 00:04:13.226 14:21:14 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:13.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.226 --rc genhtml_branch_coverage=1 00:04:13.226 --rc genhtml_function_coverage=1 00:04:13.226 --rc genhtml_legend=1 00:04:13.226 --rc geninfo_all_blocks=1 00:04:13.226 --rc geninfo_unexecuted_blocks=1 00:04:13.226 00:04:13.226 ' 00:04:13.226 14:21:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:13.226 14:21:14 -- nvmf/common.sh@7 -- # uname -s 00:04:13.226 14:21:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.226 14:21:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.226 14:21:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.226 14:21:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.226 14:21:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.226 14:21:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.226 14:21:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.226 14:21:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.226 14:21:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:13.226 14:21:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.226 14:21:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00c99eb5-4b77-4cf8-b25b-b17f9cba7a78 00:04:13.226 14:21:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00c99eb5-4b77-4cf8-b25b-b17f9cba7a78 00:04:13.226 14:21:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.226 14:21:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.226 14:21:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.226 14:21:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:13.226 14:21:14 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:13.226 14:21:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:13.226 14:21:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.226 14:21:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.226 14:21:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.226 14:21:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.226 14:21:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.226 14:21:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.226 14:21:14 -- paths/export.sh@5 -- # export PATH 00:04:13.226 14:21:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.226 14:21:14 -- nvmf/common.sh@51 -- # : 0 00:04:13.226 14:21:14 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:13.226 14:21:14 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:13.226 14:21:14 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:13.226 14:21:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.226 14:21:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.226 14:21:14 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:13.226 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:13.226 14:21:14 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:13.226 14:21:14 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:13.226 14:21:14 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:13.226 14:21:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:13.226 14:21:14 -- spdk/autotest.sh@32 -- # uname -s 00:04:13.226 14:21:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:13.226 14:21:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:13.226 14:21:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.226 14:21:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:13.226 14:21:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:13.226 14:21:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:13.226 14:21:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:13.226 14:21:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:13.226 14:21:14 -- spdk/autotest.sh@48 -- # udevadm_pid=54386 00:04:13.227 14:21:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:13.227 14:21:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:13.227 14:21:14 -- pm/common@17 -- # local monitor 00:04:13.227 14:21:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.227 14:21:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:13.227 14:21:14 -- pm/common@25 -- # sleep 1 00:04:13.227 14:21:14 -- pm/common@21 -- # date +%s 00:04:13.227 14:21:14 -- 
pm/common@21 -- # date +%s 00:04:13.227 14:21:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732112474 00:04:13.227 14:21:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732112474 00:04:13.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732112474_collect-cpu-load.pm.log 00:04:13.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732112474_collect-vmstat.pm.log 00:04:14.161 14:21:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:14.161 14:21:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:14.161 14:21:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.161 14:21:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.161 14:21:15 -- spdk/autotest.sh@59 -- # create_test_list 00:04:14.161 14:21:15 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:14.161 14:21:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.161 14:21:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:14.161 14:21:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:14.161 14:21:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:14.161 14:21:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:14.161 14:21:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:14.161 14:21:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:14.161 14:21:15 -- common/autotest_common.sh@1457 -- # uname 00:04:14.161 14:21:15 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:14.161 14:21:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:14.161 14:21:15 -- common/autotest_common.sh@1477 -- 
# uname 00:04:14.161 14:21:15 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:14.161 14:21:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:14.161 14:21:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:14.419 lcov: LCOV version 1.15 00:04:14.419 14:21:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:32.493 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:32.493 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:50.643 14:21:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:50.643 14:21:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.643 14:21:49 -- common/autotest_common.sh@10 -- # set +x 00:04:50.643 14:21:49 -- spdk/autotest.sh@78 -- # rm -f 00:04:50.643 14:21:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:50.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.643 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:50.643 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:50.643 14:21:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:50.643 14:21:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:50.643 14:21:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:50.643 14:21:50 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:50.643 
14:21:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:50.643 14:21:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:50.643 14:21:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:50.643 14:21:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:50.643 14:21:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:50.643 14:21:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:50.643 14:21:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:50.643 14:21:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:50.643 14:21:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:50.643 14:21:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:50.643 14:21:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:50.643 14:21:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:50.643 14:21:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:50.643 14:21:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:50.643 14:21:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:50.643 14:21:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:50.643 14:21:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:50.643 14:21:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:50.643 14:21:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:50.643 14:21:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:50.643 14:21:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:50.643 14:21:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.643 14:21:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.643 14:21:50 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:50.643 14:21:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:50.643 14:21:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:50.643 No valid GPT data, bailing 00:04:50.643 14:21:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:50.643 14:21:50 -- scripts/common.sh@394 -- # pt= 00:04:50.643 14:21:50 -- scripts/common.sh@395 -- # return 1 00:04:50.643 14:21:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:50.643 1+0 records in 00:04:50.643 1+0 records out 00:04:50.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458012 s, 229 MB/s 00:04:50.643 14:21:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.643 14:21:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.643 14:21:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:50.643 14:21:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:50.644 14:21:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:50.644 No valid GPT data, bailing 00:04:50.644 14:21:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:50.644 14:21:50 -- scripts/common.sh@394 -- # pt= 00:04:50.644 14:21:50 -- scripts/common.sh@395 -- # return 1 00:04:50.644 14:21:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:50.644 1+0 records in 00:04:50.644 1+0 records out 00:04:50.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406298 s, 258 MB/s 00:04:50.644 14:21:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.644 14:21:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.644 14:21:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:50.644 14:21:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:50.644 14:21:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:50.644 No valid GPT data, bailing 00:04:50.644 14:21:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:50.644 14:21:50 -- scripts/common.sh@394 -- # pt= 00:04:50.644 14:21:50 -- scripts/common.sh@395 -- # return 1 00:04:50.644 14:21:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:50.644 1+0 records in 00:04:50.644 1+0 records out 00:04:50.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361153 s, 290 MB/s 00:04:50.644 14:21:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:50.644 14:21:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:50.644 14:21:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:50.644 14:21:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:50.644 14:21:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:50.644 No valid GPT data, bailing 00:04:50.644 14:21:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:50.644 14:21:50 -- scripts/common.sh@394 -- # pt= 00:04:50.644 14:21:50 -- scripts/common.sh@395 -- # return 1 00:04:50.644 14:21:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:50.644 1+0 records in 00:04:50.644 1+0 records out 00:04:50.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427896 s, 245 MB/s 00:04:50.644 14:21:50 -- spdk/autotest.sh@105 -- # sync 00:04:50.644 14:21:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:50.644 14:21:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:50.644 14:21:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:52.019 14:21:52 -- spdk/autotest.sh@111 -- # uname -s 00:04:52.019 14:21:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:52.019 14:21:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:52.019 14:21:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:52.277 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.277 Hugepages 00:04:52.277 node hugesize free / total 00:04:52.277 node0 1048576kB 0 / 0 00:04:52.277 node0 2048kB 0 / 0 00:04:52.277 00:04:52.277 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.535 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:52.536 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:52.536 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:52.536 14:21:53 -- spdk/autotest.sh@117 -- # uname -s 00:04:52.536 14:21:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:52.536 14:21:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:52.536 14:21:53 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.358 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.358 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.358 14:21:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:54.292 14:21:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:54.292 14:21:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:54.292 14:21:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.292 14:21:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:54.292 14:21:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:54.292 14:21:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:54.292 14:21:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.292 14:21:55 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.292 14:21:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:54.550 14:21:55 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:54.550 14:21:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:54.550 14:21:55 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:54.808 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.808 Waiting for block devices as requested 00:04:54.808 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:54.808 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.066 14:21:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:55.066 14:21:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:55.066 14:21:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:55.066 14:21:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:55.066 14:21:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:55.066 14:21:55 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:55.066 14:21:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:55.066 14:21:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:55.066 14:21:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:55.066 14:21:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:55.066 14:21:55 -- common/autotest_common.sh@1543 -- # continue 00:04:55.066 14:21:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:55.066 14:21:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:55.066 14:21:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:55.066 14:21:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:55.066 14:21:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:55.066 14:21:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:55.066 14:21:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:55.066 14:21:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:55.066 14:21:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:55.066 14:21:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:55.066 14:21:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:55.066 14:21:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:55.066 14:21:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:55.066 14:21:56 -- common/autotest_common.sh@1543 -- # continue 00:04:55.066 14:21:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:55.066 14:21:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.066 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.066 14:21:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:55.066 14:21:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.066 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.066 14:21:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.889 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.889 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.889 14:21:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:55.889 14:21:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.889 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.889 14:21:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:55.890 14:21:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:55.890 14:21:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:55.890 14:21:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:55.890 14:21:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:55.890 14:21:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:55.890 14:21:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:55.890 14:21:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:55.890 
14:21:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:55.890 14:21:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:55.890 14:21:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.890 14:21:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.890 14:21:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:55.890 14:21:56 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:55.890 14:21:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:55.890 14:21:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:55.890 14:21:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:56.148 14:21:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:56.148 14:21:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:56.148 14:21:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:56.148 14:21:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:56.148 14:21:56 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:56.148 14:21:56 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:56.148 14:21:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:56.148 14:21:56 -- common/autotest_common.sh@1572 -- # return 0 00:04:56.148 14:21:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:56.148 14:21:56 -- common/autotest_common.sh@1580 -- # return 0 00:04:56.148 14:21:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:56.148 14:21:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:56.148 14:21:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:56.148 14:21:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:56.148 14:21:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:56.148 14:21:56 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.148 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:56.148 14:21:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:56.148 14:21:56 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:56.148 14:21:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.148 14:21:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.148 14:21:56 -- common/autotest_common.sh@10 -- # set +x 00:04:56.148 ************************************ 00:04:56.148 START TEST env 00:04:56.148 ************************************ 00:04:56.149 14:21:56 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:56.149 * Looking for test storage... 00:04:56.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.149 14:21:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.149 14:21:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.149 14:21:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.149 14:21:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.149 14:21:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.149 14:21:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.149 14:21:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.149 14:21:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.149 14:21:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.149 14:21:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.149 14:21:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.149 14:21:57 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:56.149 14:21:57 env -- scripts/common.sh@345 -- # : 1 00:04:56.149 14:21:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.149 14:21:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.149 14:21:57 env -- scripts/common.sh@365 -- # decimal 1 00:04:56.149 14:21:57 env -- scripts/common.sh@353 -- # local d=1 00:04:56.149 14:21:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.149 14:21:57 env -- scripts/common.sh@355 -- # echo 1 00:04:56.149 14:21:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.149 14:21:57 env -- scripts/common.sh@366 -- # decimal 2 00:04:56.149 14:21:57 env -- scripts/common.sh@353 -- # local d=2 00:04:56.149 14:21:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.149 14:21:57 env -- scripts/common.sh@355 -- # echo 2 00:04:56.149 14:21:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.149 14:21:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.149 14:21:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.149 14:21:57 env -- scripts/common.sh@368 -- # return 0 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.149 --rc genhtml_branch_coverage=1 00:04:56.149 --rc genhtml_function_coverage=1 00:04:56.149 --rc genhtml_legend=1 00:04:56.149 --rc geninfo_all_blocks=1 00:04:56.149 --rc geninfo_unexecuted_blocks=1 00:04:56.149 00:04:56.149 ' 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.149 --rc genhtml_branch_coverage=1 00:04:56.149 --rc genhtml_function_coverage=1 00:04:56.149 --rc genhtml_legend=1 00:04:56.149 --rc 
geninfo_all_blocks=1 00:04:56.149 --rc geninfo_unexecuted_blocks=1 00:04:56.149 00:04:56.149 ' 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.149 --rc genhtml_branch_coverage=1 00:04:56.149 --rc genhtml_function_coverage=1 00:04:56.149 --rc genhtml_legend=1 00:04:56.149 --rc geninfo_all_blocks=1 00:04:56.149 --rc geninfo_unexecuted_blocks=1 00:04:56.149 00:04:56.149 ' 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.149 --rc genhtml_branch_coverage=1 00:04:56.149 --rc genhtml_function_coverage=1 00:04:56.149 --rc genhtml_legend=1 00:04:56.149 --rc geninfo_all_blocks=1 00:04:56.149 --rc geninfo_unexecuted_blocks=1 00:04:56.149 00:04:56.149 ' 00:04:56.149 14:21:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.149 14:21:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.149 14:21:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.149 ************************************ 00:04:56.149 START TEST env_memory 00:04:56.149 ************************************ 00:04:56.149 14:21:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:56.407 00:04:56.407 00:04:56.407 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.407 http://cunit.sourceforge.net/ 00:04:56.407 00:04:56.407 00:04:56.407 Suite: memory 00:04:56.407 Test: alloc and free memory map ...[2024-11-20 14:21:57.260027] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:56.407 passed 00:04:56.407 Test: mem map translation ...[2024-11-20 14:21:57.321315] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:56.408 [2024-11-20 14:21:57.321438] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:56.408 [2024-11-20 14:21:57.321571] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:56.408 [2024-11-20 14:21:57.321654] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:56.408 passed 00:04:56.408 Test: mem map registration ...[2024-11-20 14:21:57.425171] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:56.408 [2024-11-20 14:21:57.425303] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:56.666 passed 00:04:56.666 Test: mem map adjacent registrations ...passed 00:04:56.666 00:04:56.666 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.666 suites 1 1 n/a 0 0 00:04:56.666 tests 4 4 4 0 0 00:04:56.666 asserts 152 152 152 0 n/a 00:04:56.666 00:04:56.666 Elapsed time = 0.345 seconds 00:04:56.666 00:04:56.666 real 0m0.384s 00:04:56.666 user 0m0.337s 00:04:56.666 sys 0m0.035s 00:04:56.666 14:21:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.666 14:21:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:56.666 ************************************ 00:04:56.666 END TEST env_memory 00:04:56.666 ************************************ 00:04:56.666 14:21:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:56.666 
14:21:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.666 14:21:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.666 14:21:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.666 ************************************ 00:04:56.666 START TEST env_vtophys 00:04:56.666 ************************************ 00:04:56.666 14:21:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:56.666 EAL: lib.eal log level changed from notice to debug 00:04:56.666 EAL: Detected lcore 0 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 1 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 2 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 3 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 4 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 5 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 6 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 7 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 8 as core 0 on socket 0 00:04:56.666 EAL: Detected lcore 9 as core 0 on socket 0 00:04:56.666 EAL: Maximum logical cores by configuration: 128 00:04:56.666 EAL: Detected CPU lcores: 10 00:04:56.666 EAL: Detected NUMA nodes: 1 00:04:56.666 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:56.666 EAL: Detected shared linkage of DPDK 00:04:56.666 EAL: No shared files mode enabled, IPC will be disabled 00:04:56.923 EAL: Selected IOVA mode 'PA' 00:04:56.923 EAL: Probing VFIO support... 00:04:56.923 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:56.923 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:56.923 EAL: Ask a virtual area of 0x2e000 bytes 00:04:56.923 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:56.923 EAL: Setting up physically contiguous memory... 
00:04:56.923 EAL: Setting maximum number of open files to 524288 00:04:56.924 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:56.924 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:56.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.924 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:56.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.924 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:56.924 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:56.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.924 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:56.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.924 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:56.924 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:56.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.924 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:56.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.924 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:56.924 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:56.924 EAL: Ask a virtual area of 0x61000 bytes 00:04:56.924 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:56.924 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:56.924 EAL: Ask a virtual area of 0x400000000 bytes 00:04:56.924 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:56.924 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:56.924 EAL: Hugepages will be freed exactly as allocated. 
00:04:56.924 EAL: No shared files mode enabled, IPC is disabled 00:04:56.924 EAL: No shared files mode enabled, IPC is disabled 00:04:56.924 EAL: TSC frequency is ~2200000 KHz 00:04:56.924 EAL: Main lcore 0 is ready (tid=7fdc5fcbda40;cpuset=[0]) 00:04:56.924 EAL: Trying to obtain current memory policy. 00:04:56.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.924 EAL: Restoring previous memory policy: 0 00:04:56.924 EAL: request: mp_malloc_sync 00:04:56.924 EAL: No shared files mode enabled, IPC is disabled 00:04:56.924 EAL: Heap on socket 0 was expanded by 2MB 00:04:56.924 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:56.924 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:56.924 EAL: Mem event callback 'spdk:(nil)' registered 00:04:56.924 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:56.924 00:04:56.924 00:04:56.924 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.924 http://cunit.sourceforge.net/ 00:04:56.924 00:04:56.924 00:04:56.924 Suite: components_suite 00:04:57.491 Test: vtophys_malloc_test ...passed 00:04:57.491 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:57.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.491 EAL: Restoring previous memory policy: 4 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was expanded by 4MB 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was shrunk by 4MB 00:04:57.491 EAL: Trying to obtain current memory policy. 
00:04:57.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.491 EAL: Restoring previous memory policy: 4 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was expanded by 6MB 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was shrunk by 6MB 00:04:57.491 EAL: Trying to obtain current memory policy. 00:04:57.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.491 EAL: Restoring previous memory policy: 4 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was expanded by 10MB 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was shrunk by 10MB 00:04:57.491 EAL: Trying to obtain current memory policy. 00:04:57.491 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.491 EAL: Restoring previous memory policy: 4 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was expanded by 18MB 00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.491 EAL: request: mp_malloc_sync 00:04:57.491 EAL: No shared files mode enabled, IPC is disabled 00:04:57.491 EAL: Heap on socket 0 was shrunk by 18MB 00:04:57.491 EAL: Trying to obtain current memory policy. 
00:04:57.491 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.491 EAL: Restoring previous memory policy: 4
00:04:57.491 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.491 EAL: request: mp_malloc_sync
00:04:57.491 EAL: No shared files mode enabled, IPC is disabled
00:04:57.491 EAL: Heap on socket 0 was expanded by 34MB
00:04:57.749 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.750 EAL: request: mp_malloc_sync
00:04:57.750 EAL: No shared files mode enabled, IPC is disabled
00:04:57.750 EAL: Heap on socket 0 was shrunk by 34MB
00:04:57.750 EAL: Trying to obtain current memory policy.
00:04:57.750 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:57.750 EAL: Restoring previous memory policy: 4
00:04:57.750 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.750 EAL: request: mp_malloc_sync
00:04:57.750 EAL: No shared files mode enabled, IPC is disabled
00:04:57.750 EAL: Heap on socket 0 was expanded by 66MB
00:04:57.750 EAL: Calling mem event callback 'spdk:(nil)'
00:04:57.750 EAL: request: mp_malloc_sync
00:04:57.750 EAL: No shared files mode enabled, IPC is disabled
00:04:57.750 EAL: Heap on socket 0 was shrunk by 66MB
00:04:58.008 EAL: Trying to obtain current memory policy.
00:04:58.008 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.008 EAL: Restoring previous memory policy: 4
00:04:58.008 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.008 EAL: request: mp_malloc_sync
00:04:58.008 EAL: No shared files mode enabled, IPC is disabled
00:04:58.008 EAL: Heap on socket 0 was expanded by 130MB
00:04:58.267 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.267 EAL: request: mp_malloc_sync
00:04:58.267 EAL: No shared files mode enabled, IPC is disabled
00:04:58.267 EAL: Heap on socket 0 was shrunk by 130MB
00:04:58.525 EAL: Trying to obtain current memory policy.
00:04:58.525 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:58.525 EAL: Restoring previous memory policy: 4
00:04:58.525 EAL: Calling mem event callback 'spdk:(nil)'
00:04:58.525 EAL: request: mp_malloc_sync
00:04:58.525 EAL: No shared files mode enabled, IPC is disabled
00:04:58.525 EAL: Heap on socket 0 was expanded by 258MB
00:04:59.091 EAL: Calling mem event callback 'spdk:(nil)'
00:04:59.091 EAL: request: mp_malloc_sync
00:04:59.091 EAL: No shared files mode enabled, IPC is disabled
00:04:59.091 EAL: Heap on socket 0 was shrunk by 258MB
00:04:59.348 EAL: Trying to obtain current memory policy.
00:04:59.348 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:59.607 EAL: Restoring previous memory policy: 4
00:04:59.607 EAL: Calling mem event callback 'spdk:(nil)'
00:04:59.607 EAL: request: mp_malloc_sync
00:04:59.607 EAL: No shared files mode enabled, IPC is disabled
00:04:59.607 EAL: Heap on socket 0 was expanded by 514MB
00:05:00.542 EAL: Calling mem event callback 'spdk:(nil)'
00:05:00.542 EAL: request: mp_malloc_sync
00:05:00.542 EAL: No shared files mode enabled, IPC is disabled
00:05:00.542 EAL: Heap on socket 0 was shrunk by 514MB
00:05:01.109 EAL: Trying to obtain current memory policy.
00:05:01.109 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:01.677 EAL: Restoring previous memory policy: 4
00:05:01.677 EAL: Calling mem event callback 'spdk:(nil)'
00:05:01.677 EAL: request: mp_malloc_sync
00:05:01.677 EAL: No shared files mode enabled, IPC is disabled
00:05:01.677 EAL: Heap on socket 0 was expanded by 1026MB
00:05:03.053 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.311 EAL: request: mp_malloc_sync
00:05:03.311 EAL: No shared files mode enabled, IPC is disabled
00:05:03.311 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:05.214 passed
00:05:05.214 
00:05:05.214 Run Summary: Type Total Ran Passed Failed Inactive
00:05:05.214 suites 1 1 n/a 0 0
00:05:05.214 tests 2 2 2 0 0
00:05:05.214 asserts 5628 5628 5628 0 n/a
00:05:05.214 
00:05:05.214 Elapsed time = 7.853 seconds
00:05:05.214 EAL: Calling mem event callback 'spdk:(nil)'
00:05:05.214 EAL: request: mp_malloc_sync
00:05:05.214 EAL: No shared files mode enabled, IPC is disabled
00:05:05.214 EAL: Heap on socket 0 was shrunk by 2MB
00:05:05.214 EAL: No shared files mode enabled, IPC is disabled
00:05:05.214 EAL: No shared files mode enabled, IPC is disabled
00:05:05.214 EAL: No shared files mode enabled, IPC is disabled
00:05:05.214 
00:05:05.214 real 0m8.207s
00:05:05.214 user 0m6.881s
00:05:05.214 sys 0m1.163s
00:05:05.214 14:22:05 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.214 14:22:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:05.214 ************************************
00:05:05.214 END TEST env_vtophys
00:05:05.214 ************************************
00:05:05.214 14:22:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:05.214 14:22:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.214 14:22:05 env
************************************
00:05:05.214 START TEST env_pci
************************************
00:05:05.214 14:22:05 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:05.214 
00:05:05.214 
00:05:05.214 CUnit - A unit testing framework for C - Version 2.1-3
00:05:05.214 http://cunit.sourceforge.net/
00:05:05.214 
00:05:05.214 
00:05:05.214 Suite: pci
00:05:05.214 Test: pci_hook ...[2024-11-20 14:22:05.908743] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56736 has claimed it
00:05:05.214 passed
00:05:05.214 00
00:05:05.214 EAL: Cannot find device (10000:00:01.0)
00:05:05.214 EAL: Failed to attach device on primary process
00:05:05.214 Run Summary: Type Total Ran Passed Failed Inactive
00:05:05.214 suites 1 1 n/a 0 0
00:05:05.214 tests 1 1 1 0 0
00:05:05.214 asserts 25 25 25 0 n/a
00:05:05.214 
00:05:05.214 Elapsed time = 0.009 seconds
00:05:05.214 
00:05:05.214 real 0m0.088s
00:05:05.214 user 0m0.045s
00:05:05.214 sys 0m0.042s
00:05:05.214 14:22:05 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.214 14:22:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:05.214 ************************************
00:05:05.214 END TEST env_pci
************************************
00:05:05.214 14:22:05 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:05.214 14:22:05 env -- env/env.sh@15 -- # uname
00:05:05.214 14:22:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:05.214 14:22:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:05.214 14:22:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:05.214 14:22:05 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:05.214 14:22:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.214 14:22:05 env -- common/autotest_common.sh@10 -- # set +x
00:05:05.214 ************************************
00:05:05.214 START TEST env_dpdk_post_init
************************************
00:05:05.214 14:22:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:05.214 EAL: Detected CPU lcores: 10
00:05:05.214 EAL: Detected NUMA nodes: 1
00:05:05.214 EAL: Detected shared linkage of DPDK
00:05:05.214 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:05.214 EAL: Selected IOVA mode 'PA'
00:05:05.214 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:05.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:05.214 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:05.473 Starting DPDK initialization...
00:05:05.473 Starting SPDK post initialization...
00:05:05.473 SPDK NVMe probe
00:05:05.473 Attaching to 0000:00:10.0
00:05:05.473 Attaching to 0000:00:11.0
00:05:05.473 Attached to 0000:00:10.0
00:05:05.473 Attached to 0000:00:11.0
00:05:05.473 Cleaning up...
00:05:05.473 
00:05:05.473 real 0m0.315s
00:05:05.473 user 0m0.112s
00:05:05.473 sys 0m0.105s
00:05:05.473 14:22:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.473 14:22:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:05.473 ************************************
00:05:05.473 END TEST env_dpdk_post_init
************************************
00:05:05.473 14:22:06 env -- env/env.sh@26 -- # uname
00:05:05.473 14:22:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:05.473 14:22:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:05.473 14:22:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.473 14:22:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.473 14:22:06 env -- common/autotest_common.sh@10 -- # set +x
00:05:05.473 ************************************
00:05:05.473 START TEST env_mem_callbacks
************************************
00:05:05.473 14:22:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:05.473 EAL: Detected CPU lcores: 10
00:05:05.473 EAL: Detected NUMA nodes: 1
00:05:05.473 EAL: Detected shared linkage of DPDK
00:05:05.473 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:05.473 EAL: Selected IOVA mode 'PA'
00:05:05.731 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:05.731 
00:05:05.731 
00:05:05.731 CUnit - A unit testing framework for C - Version 2.1-3
00:05:05.731 http://cunit.sourceforge.net/
00:05:05.731 
00:05:05.731 
00:05:05.731 Suite: memory
00:05:05.731 Test: test ...
00:05:05.732 register 0x200000200000 2097152
00:05:05.732 malloc 3145728
00:05:05.732 register 0x200000400000 4194304
00:05:05.732 buf 0x2000004fffc0 len 3145728 PASSED
00:05:05.732 malloc 64
00:05:05.732 buf 0x2000004ffec0 len 64 PASSED
00:05:05.732 malloc 4194304
00:05:05.732 register 0x200000800000 6291456
00:05:05.732 buf 0x2000009fffc0 len 4194304 PASSED
00:05:05.732 free 0x2000004fffc0 3145728
00:05:05.732 free 0x2000004ffec0 64
00:05:05.732 unregister 0x200000400000 4194304 PASSED
00:05:05.732 free 0x2000009fffc0 4194304
00:05:05.732 unregister 0x200000800000 6291456 PASSED
00:05:05.732 malloc 8388608
00:05:05.732 register 0x200000400000 10485760
00:05:05.732 buf 0x2000005fffc0 len 8388608 PASSED
00:05:05.732 free 0x2000005fffc0 8388608
00:05:05.732 unregister 0x200000400000 10485760 PASSED
00:05:05.732 passed
00:05:05.732 
00:05:05.732 Run Summary: Type Total Ran Passed Failed Inactive
00:05:05.732 suites 1 1 n/a 0 0
00:05:05.732 tests 1 1 1 0 0
00:05:05.732 asserts 15 15 15 0 n/a
00:05:05.732 
00:05:05.732 Elapsed time = 0.075 seconds
00:05:05.732 
00:05:05.732 real 0m0.277s
00:05:05.732 user 0m0.103s
00:05:05.732 sys 0m0.072s
00:05:05.732 14:22:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.732 14:22:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:05.732 ************************************
00:05:05.732 END TEST env_mem_callbacks
************************************
00:05:05.732 
00:05:05.732 real 0m9.719s
00:05:05.732 user 0m7.678s
00:05:05.732 sys 0m1.660s
00:05:05.732 14:22:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.732 14:22:06 env -- common/autotest_common.sh@10 -- # set +x
00:05:05.732 ************************************
00:05:05.732 END TEST env
************************************
00:05:05.732 14:22:06 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:05.732 14:22:06 --
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.732 14:22:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.732 14:22:06 -- common/autotest_common.sh@10 -- # set +x 00:05:05.732 ************************************ 00:05:05.732 START TEST rpc 00:05:05.732 ************************************ 00:05:05.732 14:22:06 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:05.991 * Looking for test storage... 00:05:05.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.991 14:22:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.991 14:22:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.991 14:22:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.991 14:22:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.991 14:22:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.991 14:22:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.991 14:22:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.991 14:22:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.991 14:22:06 rpc -- scripts/common.sh@345 -- # : 1 00:05:05.991 14:22:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.991 14:22:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.991 14:22:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.991 14:22:06 rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.991 14:22:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.991 14:22:06 rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.991 14:22:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.991 14:22:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.991 14:22:06 rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.991 14:22:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.991 14:22:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.991 14:22:06 rpc -- scripts/common.sh@368 -- # return 0 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.991 --rc genhtml_branch_coverage=1 00:05:05.991 --rc genhtml_function_coverage=1 00:05:05.991 --rc genhtml_legend=1 00:05:05.991 --rc geninfo_all_blocks=1 00:05:05.991 --rc geninfo_unexecuted_blocks=1 00:05:05.991 00:05:05.991 ' 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.991 --rc genhtml_branch_coverage=1 00:05:05.991 --rc genhtml_function_coverage=1 00:05:05.991 --rc genhtml_legend=1 00:05:05.991 --rc geninfo_all_blocks=1 00:05:05.991 --rc geninfo_unexecuted_blocks=1 00:05:05.991 00:05:05.991 ' 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:05.991 --rc genhtml_branch_coverage=1 00:05:05.991 --rc genhtml_function_coverage=1 00:05:05.991 --rc genhtml_legend=1 00:05:05.991 --rc geninfo_all_blocks=1 00:05:05.991 --rc geninfo_unexecuted_blocks=1 00:05:05.991 00:05:05.991 ' 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.991 --rc genhtml_branch_coverage=1 00:05:05.991 --rc genhtml_function_coverage=1 00:05:05.991 --rc genhtml_legend=1 00:05:05.991 --rc geninfo_all_blocks=1 00:05:05.991 --rc geninfo_unexecuted_blocks=1 00:05:05.991 00:05:05.991 ' 00:05:05.991 14:22:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56863 00:05:05.991 14:22:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.991 14:22:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56863 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@835 -- # '[' -z 56863 ']' 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.991 14:22:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.991 14:22:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:06.250 [2024-11-20 14:22:07.046769] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:05:06.250 [2024-11-20 14:22:07.046935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56863 ] 00:05:06.250 [2024-11-20 14:22:07.232998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.507 [2024-11-20 14:22:07.396499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:06.507 [2024-11-20 14:22:07.396592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56863' to capture a snapshot of events at runtime. 00:05:06.507 [2024-11-20 14:22:07.396615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:06.508 [2024-11-20 14:22:07.396652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:06.508 [2024-11-20 14:22:07.396669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56863 for offline analysis/debug. 
00:05:06.508 [2024-11-20 14:22:07.398303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.444 14:22:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.444 14:22:08 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.444 14:22:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.444 14:22:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.444 14:22:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:07.444 14:22:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:07.444 14:22:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.444 14:22:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.444 14:22:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.444 ************************************ 00:05:07.444 START TEST rpc_integrity 00:05:07.444 ************************************ 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:07.444 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.444 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.444 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.444 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.444 14:22:08 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.444 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:07.444 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.444 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.764 { 00:05:07.764 "name": "Malloc0", 00:05:07.764 "aliases": [ 00:05:07.764 "f4c113c5-8f07-497f-a997-39ac1d76d154" 00:05:07.764 ], 00:05:07.764 "product_name": "Malloc disk", 00:05:07.764 "block_size": 512, 00:05:07.764 "num_blocks": 16384, 00:05:07.764 "uuid": "f4c113c5-8f07-497f-a997-39ac1d76d154", 00:05:07.764 "assigned_rate_limits": { 00:05:07.764 "rw_ios_per_sec": 0, 00:05:07.764 "rw_mbytes_per_sec": 0, 00:05:07.764 "r_mbytes_per_sec": 0, 00:05:07.764 "w_mbytes_per_sec": 0 00:05:07.764 }, 00:05:07.764 "claimed": false, 00:05:07.764 "zoned": false, 00:05:07.764 "supported_io_types": { 00:05:07.764 "read": true, 00:05:07.764 "write": true, 00:05:07.764 "unmap": true, 00:05:07.764 "flush": true, 00:05:07.764 "reset": true, 00:05:07.764 "nvme_admin": false, 00:05:07.764 "nvme_io": false, 00:05:07.764 "nvme_io_md": false, 00:05:07.764 "write_zeroes": true, 00:05:07.764 "zcopy": true, 00:05:07.764 "get_zone_info": false, 00:05:07.764 "zone_management": false, 00:05:07.764 "zone_append": false, 00:05:07.764 "compare": false, 00:05:07.764 "compare_and_write": false, 00:05:07.764 "abort": true, 00:05:07.764 "seek_hole": false, 
00:05:07.764 "seek_data": false, 00:05:07.764 "copy": true, 00:05:07.764 "nvme_iov_md": false 00:05:07.764 }, 00:05:07.764 "memory_domains": [ 00:05:07.764 { 00:05:07.764 "dma_device_id": "system", 00:05:07.764 "dma_device_type": 1 00:05:07.764 }, 00:05:07.764 { 00:05:07.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.764 "dma_device_type": 2 00:05:07.764 } 00:05:07.764 ], 00:05:07.764 "driver_specific": {} 00:05:07.764 } 00:05:07.764 ]' 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.764 [2024-11-20 14:22:08.564807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:07.764 [2024-11-20 14:22:08.564925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.764 [2024-11-20 14:22:08.564973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:07.764 [2024-11-20 14:22:08.565004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.764 [2024-11-20 14:22:08.568174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.764 [2024-11-20 14:22:08.568223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.764 Passthru0 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.764 { 00:05:07.764 "name": "Malloc0", 00:05:07.764 "aliases": [ 00:05:07.764 "f4c113c5-8f07-497f-a997-39ac1d76d154" 00:05:07.764 ], 00:05:07.764 "product_name": "Malloc disk", 00:05:07.764 "block_size": 512, 00:05:07.764 "num_blocks": 16384, 00:05:07.764 "uuid": "f4c113c5-8f07-497f-a997-39ac1d76d154", 00:05:07.764 "assigned_rate_limits": { 00:05:07.764 "rw_ios_per_sec": 0, 00:05:07.764 "rw_mbytes_per_sec": 0, 00:05:07.764 "r_mbytes_per_sec": 0, 00:05:07.764 "w_mbytes_per_sec": 0 00:05:07.764 }, 00:05:07.764 "claimed": true, 00:05:07.764 "claim_type": "exclusive_write", 00:05:07.764 "zoned": false, 00:05:07.764 "supported_io_types": { 00:05:07.764 "read": true, 00:05:07.764 "write": true, 00:05:07.764 "unmap": true, 00:05:07.764 "flush": true, 00:05:07.764 "reset": true, 00:05:07.764 "nvme_admin": false, 00:05:07.764 "nvme_io": false, 00:05:07.764 "nvme_io_md": false, 00:05:07.764 "write_zeroes": true, 00:05:07.764 "zcopy": true, 00:05:07.764 "get_zone_info": false, 00:05:07.764 "zone_management": false, 00:05:07.764 "zone_append": false, 00:05:07.764 "compare": false, 00:05:07.764 "compare_and_write": false, 00:05:07.764 "abort": true, 00:05:07.764 "seek_hole": false, 00:05:07.764 "seek_data": false, 00:05:07.764 "copy": true, 00:05:07.764 "nvme_iov_md": false 00:05:07.764 }, 00:05:07.764 "memory_domains": [ 00:05:07.764 { 00:05:07.764 "dma_device_id": "system", 00:05:07.764 "dma_device_type": 1 00:05:07.764 }, 00:05:07.764 { 00:05:07.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.764 "dma_device_type": 2 00:05:07.764 } 00:05:07.764 ], 00:05:07.764 "driver_specific": {} 00:05:07.764 }, 00:05:07.764 { 00:05:07.764 "name": "Passthru0", 00:05:07.764 "aliases": [ 00:05:07.764 "e7b9564f-6464-5ebc-8968-69fa8be71d3e" 00:05:07.764 ], 00:05:07.764 "product_name": "passthru", 00:05:07.764 
"block_size": 512, 00:05:07.764 "num_blocks": 16384, 00:05:07.764 "uuid": "e7b9564f-6464-5ebc-8968-69fa8be71d3e", 00:05:07.764 "assigned_rate_limits": { 00:05:07.764 "rw_ios_per_sec": 0, 00:05:07.764 "rw_mbytes_per_sec": 0, 00:05:07.764 "r_mbytes_per_sec": 0, 00:05:07.764 "w_mbytes_per_sec": 0 00:05:07.764 }, 00:05:07.764 "claimed": false, 00:05:07.764 "zoned": false, 00:05:07.764 "supported_io_types": { 00:05:07.764 "read": true, 00:05:07.764 "write": true, 00:05:07.764 "unmap": true, 00:05:07.764 "flush": true, 00:05:07.764 "reset": true, 00:05:07.764 "nvme_admin": false, 00:05:07.764 "nvme_io": false, 00:05:07.764 "nvme_io_md": false, 00:05:07.764 "write_zeroes": true, 00:05:07.764 "zcopy": true, 00:05:07.764 "get_zone_info": false, 00:05:07.764 "zone_management": false, 00:05:07.764 "zone_append": false, 00:05:07.764 "compare": false, 00:05:07.764 "compare_and_write": false, 00:05:07.764 "abort": true, 00:05:07.764 "seek_hole": false, 00:05:07.764 "seek_data": false, 00:05:07.764 "copy": true, 00:05:07.764 "nvme_iov_md": false 00:05:07.764 }, 00:05:07.764 "memory_domains": [ 00:05:07.764 { 00:05:07.764 "dma_device_id": "system", 00:05:07.764 "dma_device_type": 1 00:05:07.764 }, 00:05:07.764 { 00:05:07.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.764 "dma_device_type": 2 00:05:07.764 } 00:05:07.764 ], 00:05:07.764 "driver_specific": { 00:05:07.764 "passthru": { 00:05:07.764 "name": "Passthru0", 00:05:07.764 "base_bdev_name": "Malloc0" 00:05:07.764 } 00:05:07.764 } 00:05:07.764 } 00:05:07.764 ]' 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.764 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.764 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.764 14:22:08 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.765 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.765 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.765 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.765 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.765 14:22:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.765 00:05:07.765 real 0m0.349s 00:05:07.765 user 0m0.212s 00:05:07.765 sys 0m0.035s 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.765 14:22:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.765 ************************************ 00:05:07.765 END TEST rpc_integrity 00:05:07.765 ************************************ 00:05:07.765 14:22:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:07.765 14:22:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.765 14:22:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.765 14:22:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 ************************************ 00:05:08.039 START TEST rpc_plugins 00:05:08.039 ************************************ 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:08.039 { 00:05:08.039 "name": "Malloc1", 00:05:08.039 "aliases": [ 00:05:08.039 "64d6cd19-c0ac-4c91-8368-d482e085954a" 00:05:08.039 ], 00:05:08.039 "product_name": "Malloc disk", 00:05:08.039 "block_size": 4096, 00:05:08.039 "num_blocks": 256, 00:05:08.039 "uuid": "64d6cd19-c0ac-4c91-8368-d482e085954a", 00:05:08.039 "assigned_rate_limits": { 00:05:08.039 "rw_ios_per_sec": 0, 00:05:08.039 "rw_mbytes_per_sec": 0, 00:05:08.039 "r_mbytes_per_sec": 0, 00:05:08.039 "w_mbytes_per_sec": 0 00:05:08.039 }, 00:05:08.039 "claimed": false, 00:05:08.039 "zoned": false, 00:05:08.039 "supported_io_types": { 00:05:08.039 "read": true, 00:05:08.039 "write": true, 00:05:08.039 "unmap": true, 00:05:08.039 "flush": true, 00:05:08.039 "reset": true, 00:05:08.039 "nvme_admin": false, 00:05:08.039 "nvme_io": false, 00:05:08.039 "nvme_io_md": false, 00:05:08.039 "write_zeroes": true, 00:05:08.039 "zcopy": true, 00:05:08.039 "get_zone_info": false, 00:05:08.039 "zone_management": false, 00:05:08.039 "zone_append": false, 00:05:08.039 "compare": false, 00:05:08.039 "compare_and_write": false, 00:05:08.039 "abort": true, 00:05:08.039 "seek_hole": false, 00:05:08.039 "seek_data": false, 00:05:08.039 "copy": 
true, 00:05:08.039 "nvme_iov_md": false 00:05:08.039 }, 00:05:08.039 "memory_domains": [ 00:05:08.039 { 00:05:08.039 "dma_device_id": "system", 00:05:08.039 "dma_device_type": 1 00:05:08.039 }, 00:05:08.039 { 00:05:08.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.039 "dma_device_type": 2 00:05:08.039 } 00:05:08.039 ], 00:05:08.039 "driver_specific": {} 00:05:08.039 } 00:05:08.039 ]' 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:08.039 14:22:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:08.039 00:05:08.039 real 0m0.158s 00:05:08.039 user 0m0.096s 00:05:08.039 sys 0m0.020s 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.039 14:22:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 ************************************ 00:05:08.039 END TEST rpc_plugins 00:05:08.039 ************************************ 00:05:08.039 14:22:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:08.039 14:22:08 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.039 14:22:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.039 14:22:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 ************************************ 00:05:08.039 START TEST rpc_trace_cmd_test 00:05:08.039 ************************************ 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:08.039 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56863", 00:05:08.039 "tpoint_group_mask": "0x8", 00:05:08.039 "iscsi_conn": { 00:05:08.039 "mask": "0x2", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "scsi": { 00:05:08.039 "mask": "0x4", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "bdev": { 00:05:08.039 "mask": "0x8", 00:05:08.039 "tpoint_mask": "0xffffffffffffffff" 00:05:08.039 }, 00:05:08.039 "nvmf_rdma": { 00:05:08.039 "mask": "0x10", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "nvmf_tcp": { 00:05:08.039 "mask": "0x20", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "ftl": { 00:05:08.039 "mask": "0x40", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "blobfs": { 00:05:08.039 "mask": "0x80", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "dsa": { 00:05:08.039 "mask": "0x200", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "thread": { 00:05:08.039 "mask": "0x400", 00:05:08.039 
"tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "nvme_pcie": { 00:05:08.039 "mask": "0x800", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "iaa": { 00:05:08.039 "mask": "0x1000", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "nvme_tcp": { 00:05:08.039 "mask": "0x2000", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "bdev_nvme": { 00:05:08.039 "mask": "0x4000", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "sock": { 00:05:08.039 "mask": "0x8000", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "blob": { 00:05:08.039 "mask": "0x10000", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "bdev_raid": { 00:05:08.039 "mask": "0x20000", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 }, 00:05:08.039 "scheduler": { 00:05:08.039 "mask": "0x40000", 00:05:08.039 "tpoint_mask": "0x0" 00:05:08.039 } 00:05:08.039 }' 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:08.039 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:08.300 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:08.300 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:08.300 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:08.300 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:08.300 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:08.300 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:08.300 14:22:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:08.300 00:05:08.300 real 0m0.257s 00:05:08.301 user 0m0.219s 00:05:08.301 sys 0m0.030s 00:05:08.301 14:22:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:08.301 14:22:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.301 ************************************ 00:05:08.301 END TEST rpc_trace_cmd_test 00:05:08.301 ************************************ 00:05:08.301 14:22:09 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:08.301 14:22:09 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:08.301 14:22:09 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:08.301 14:22:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.301 14:22:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.301 14:22:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.301 ************************************ 00:05:08.301 START TEST rpc_daemon_integrity 00:05:08.301 ************************************ 00:05:08.301 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:08.301 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.301 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.301 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.301 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.301 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.301 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.574 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.574 { 00:05:08.574 "name": "Malloc2", 00:05:08.574 "aliases": [ 00:05:08.574 "1d3ef1de-0194-4a41-80a8-ac0bca41a8f1" 00:05:08.574 ], 00:05:08.574 "product_name": "Malloc disk", 00:05:08.574 "block_size": 512, 00:05:08.574 "num_blocks": 16384, 00:05:08.574 "uuid": "1d3ef1de-0194-4a41-80a8-ac0bca41a8f1", 00:05:08.575 "assigned_rate_limits": { 00:05:08.575 "rw_ios_per_sec": 0, 00:05:08.575 "rw_mbytes_per_sec": 0, 00:05:08.575 "r_mbytes_per_sec": 0, 00:05:08.575 "w_mbytes_per_sec": 0 00:05:08.575 }, 00:05:08.575 "claimed": false, 00:05:08.575 "zoned": false, 00:05:08.575 "supported_io_types": { 00:05:08.575 "read": true, 00:05:08.575 "write": true, 00:05:08.575 "unmap": true, 00:05:08.575 "flush": true, 00:05:08.575 "reset": true, 00:05:08.575 "nvme_admin": false, 00:05:08.575 "nvme_io": false, 00:05:08.575 "nvme_io_md": false, 00:05:08.575 "write_zeroes": true, 00:05:08.575 "zcopy": true, 00:05:08.575 "get_zone_info": false, 00:05:08.575 "zone_management": false, 00:05:08.575 "zone_append": false, 00:05:08.575 "compare": false, 00:05:08.575 "compare_and_write": false, 00:05:08.575 "abort": true, 00:05:08.575 "seek_hole": false, 00:05:08.575 "seek_data": false, 00:05:08.575 "copy": true, 00:05:08.575 "nvme_iov_md": false 00:05:08.575 }, 00:05:08.575 "memory_domains": [ 00:05:08.575 { 00:05:08.575 "dma_device_id": "system", 00:05:08.575 "dma_device_type": 1 00:05:08.575 }, 00:05:08.575 { 00:05:08.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.575 "dma_device_type": 2 00:05:08.575 } 
00:05:08.575 ], 00:05:08.575 "driver_specific": {} 00:05:08.575 } 00:05:08.575 ]' 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.575 [2024-11-20 14:22:09.464191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:08.575 [2024-11-20 14:22:09.464307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.575 [2024-11-20 14:22:09.464347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:08.575 [2024-11-20 14:22:09.464370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.575 [2024-11-20 14:22:09.467699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.575 [2024-11-20 14:22:09.467746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.575 Passthru0 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.575 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.575 { 00:05:08.575 "name": "Malloc2", 00:05:08.575 "aliases": [ 00:05:08.575 "1d3ef1de-0194-4a41-80a8-ac0bca41a8f1" 
00:05:08.575 ], 00:05:08.575 "product_name": "Malloc disk", 00:05:08.575 "block_size": 512, 00:05:08.575 "num_blocks": 16384, 00:05:08.575 "uuid": "1d3ef1de-0194-4a41-80a8-ac0bca41a8f1", 00:05:08.575 "assigned_rate_limits": { 00:05:08.575 "rw_ios_per_sec": 0, 00:05:08.575 "rw_mbytes_per_sec": 0, 00:05:08.575 "r_mbytes_per_sec": 0, 00:05:08.575 "w_mbytes_per_sec": 0 00:05:08.575 }, 00:05:08.575 "claimed": true, 00:05:08.575 "claim_type": "exclusive_write", 00:05:08.575 "zoned": false, 00:05:08.575 "supported_io_types": { 00:05:08.575 "read": true, 00:05:08.575 "write": true, 00:05:08.575 "unmap": true, 00:05:08.575 "flush": true, 00:05:08.575 "reset": true, 00:05:08.575 "nvme_admin": false, 00:05:08.575 "nvme_io": false, 00:05:08.575 "nvme_io_md": false, 00:05:08.575 "write_zeroes": true, 00:05:08.575 "zcopy": true, 00:05:08.575 "get_zone_info": false, 00:05:08.575 "zone_management": false, 00:05:08.575 "zone_append": false, 00:05:08.575 "compare": false, 00:05:08.575 "compare_and_write": false, 00:05:08.575 "abort": true, 00:05:08.575 "seek_hole": false, 00:05:08.575 "seek_data": false, 00:05:08.575 "copy": true, 00:05:08.575 "nvme_iov_md": false 00:05:08.575 }, 00:05:08.575 "memory_domains": [ 00:05:08.575 { 00:05:08.575 "dma_device_id": "system", 00:05:08.575 "dma_device_type": 1 00:05:08.575 }, 00:05:08.575 { 00:05:08.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.575 "dma_device_type": 2 00:05:08.575 } 00:05:08.575 ], 00:05:08.575 "driver_specific": {} 00:05:08.575 }, 00:05:08.575 { 00:05:08.575 "name": "Passthru0", 00:05:08.575 "aliases": [ 00:05:08.575 "8e640026-8cef-505a-8a56-7cd0e697f90a" 00:05:08.575 ], 00:05:08.575 "product_name": "passthru", 00:05:08.575 "block_size": 512, 00:05:08.575 "num_blocks": 16384, 00:05:08.575 "uuid": "8e640026-8cef-505a-8a56-7cd0e697f90a", 00:05:08.575 "assigned_rate_limits": { 00:05:08.575 "rw_ios_per_sec": 0, 00:05:08.575 "rw_mbytes_per_sec": 0, 00:05:08.575 "r_mbytes_per_sec": 0, 00:05:08.575 "w_mbytes_per_sec": 0 
00:05:08.575 }, 00:05:08.575 "claimed": false, 00:05:08.575 "zoned": false, 00:05:08.575 "supported_io_types": { 00:05:08.575 "read": true, 00:05:08.575 "write": true, 00:05:08.575 "unmap": true, 00:05:08.575 "flush": true, 00:05:08.575 "reset": true, 00:05:08.575 "nvme_admin": false, 00:05:08.575 "nvme_io": false, 00:05:08.575 "nvme_io_md": false, 00:05:08.575 "write_zeroes": true, 00:05:08.575 "zcopy": true, 00:05:08.575 "get_zone_info": false, 00:05:08.575 "zone_management": false, 00:05:08.575 "zone_append": false, 00:05:08.575 "compare": false, 00:05:08.575 "compare_and_write": false, 00:05:08.575 "abort": true, 00:05:08.575 "seek_hole": false, 00:05:08.575 "seek_data": false, 00:05:08.575 "copy": true, 00:05:08.575 "nvme_iov_md": false 00:05:08.575 }, 00:05:08.575 "memory_domains": [ 00:05:08.575 { 00:05:08.575 "dma_device_id": "system", 00:05:08.576 "dma_device_type": 1 00:05:08.576 }, 00:05:08.576 { 00:05:08.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.576 "dma_device_type": 2 00:05:08.576 } 00:05:08.576 ], 00:05:08.576 "driver_specific": { 00:05:08.576 "passthru": { 00:05:08.576 "name": "Passthru0", 00:05:08.576 "base_bdev_name": "Malloc2" 00:05:08.576 } 00:05:08.576 } 00:05:08.576 } 00:05:08.576 ]' 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.576 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:08.834 14:22:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.834 00:05:08.834 real 0m0.344s 00:05:08.834 user 0m0.207s 00:05:08.834 sys 0m0.041s 00:05:08.834 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.834 ************************************ 00:05:08.834 END TEST rpc_daemon_integrity 00:05:08.834 ************************************ 00:05:08.834 14:22:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.834 14:22:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:08.834 14:22:09 rpc -- rpc/rpc.sh@84 -- # killprocess 56863 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@954 -- # '[' -z 56863 ']' 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@958 -- # kill -0 56863 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56863 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.834 killing process with pid 56863 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56863' 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@973 -- # kill 56863 00:05:08.834 14:22:09 rpc -- common/autotest_common.sh@978 -- # wait 56863 00:05:11.363 00:05:11.363 real 0m5.380s 00:05:11.363 user 0m5.941s 00:05:11.363 sys 0m0.953s 00:05:11.363 14:22:12 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.363 ************************************ 00:05:11.363 END TEST rpc 00:05:11.363 ************************************ 00:05:11.363 14:22:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.363 14:22:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.363 14:22:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.363 14:22:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.363 14:22:12 -- common/autotest_common.sh@10 -- # set +x 00:05:11.363 ************************************ 00:05:11.363 START TEST skip_rpc 00:05:11.363 ************************************ 00:05:11.363 14:22:12 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.363 * Looking for test storage... 
00:05:11.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.363 14:22:12 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.363 14:22:12 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.363 14:22:12 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.363 14:22:12 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.363 14:22:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.364 14:22:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.364 --rc genhtml_branch_coverage=1 00:05:11.364 --rc genhtml_function_coverage=1 00:05:11.364 --rc genhtml_legend=1 00:05:11.364 --rc geninfo_all_blocks=1 00:05:11.364 --rc geninfo_unexecuted_blocks=1 00:05:11.364 00:05:11.364 ' 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.364 --rc genhtml_branch_coverage=1 00:05:11.364 --rc genhtml_function_coverage=1 00:05:11.364 --rc genhtml_legend=1 00:05:11.364 --rc geninfo_all_blocks=1 00:05:11.364 --rc geninfo_unexecuted_blocks=1 00:05:11.364 00:05:11.364 ' 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:11.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.364 --rc genhtml_branch_coverage=1 00:05:11.364 --rc genhtml_function_coverage=1 00:05:11.364 --rc genhtml_legend=1 00:05:11.364 --rc geninfo_all_blocks=1 00:05:11.364 --rc geninfo_unexecuted_blocks=1 00:05:11.364 00:05:11.364 ' 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.364 --rc genhtml_branch_coverage=1 00:05:11.364 --rc genhtml_function_coverage=1 00:05:11.364 --rc genhtml_legend=1 00:05:11.364 --rc geninfo_all_blocks=1 00:05:11.364 --rc geninfo_unexecuted_blocks=1 00:05:11.364 00:05:11.364 ' 00:05:11.364 14:22:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.364 14:22:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:11.364 14:22:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.364 14:22:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.364 ************************************ 00:05:11.364 START TEST skip_rpc 00:05:11.364 ************************************ 00:05:11.364 14:22:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:11.364 14:22:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57092 00:05:11.364 14:22:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.364 14:22:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.364 14:22:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.623 [2024-11-20 14:22:12.495898] Starting SPDK v25.01-pre 
git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:05:11.623 [2024-11-20 14:22:12.496084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57092 ] 00:05:11.881 [2024-11-20 14:22:12.684645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.881 [2024-11-20 14:22:12.830022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57092 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57092 ']' 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57092 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57092 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57092' 00:05:17.147 killing process with pid 57092 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57092 00:05:17.147 14:22:17 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57092 00:05:19.044 00:05:19.044 real 0m7.359s 00:05:19.044 user 0m6.723s 00:05:19.044 sys 0m0.527s 00:05:19.044 14:22:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.044 14:22:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.044 ************************************ 00:05:19.044 END TEST skip_rpc 00:05:19.044 ************************************ 00:05:19.044 14:22:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.044 14:22:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.044 14:22:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.044 14:22:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.044 
************************************ 00:05:19.044 START TEST skip_rpc_with_json 00:05:19.044 ************************************ 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57202 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57202 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57202 ']' 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.044 14:22:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.044 [2024-11-20 14:22:19.893328] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:05:19.044 [2024-11-20 14:22:19.893494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57202 ] 00:05:19.044 [2024-11-20 14:22:20.068713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.302 [2024-11-20 14:22:20.209254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.234 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.234 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:20.234 14:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.234 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.234 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.234 [2024-11-20 14:22:21.198541] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.235 request: 00:05:20.235 { 00:05:20.235 "trtype": "tcp", 00:05:20.235 "method": "nvmf_get_transports", 00:05:20.235 "req_id": 1 00:05:20.235 } 00:05:20.235 Got JSON-RPC error response 00:05:20.235 response: 00:05:20.235 { 00:05:20.235 "code": -19, 00:05:20.235 "message": "No such device" 00:05:20.235 } 00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.235 [2024-11-20 14:22:21.210838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.235 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.495 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.495 14:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.495 { 00:05:20.495 "subsystems": [ 00:05:20.495 { 00:05:20.495 "subsystem": "fsdev", 00:05:20.495 "config": [ 00:05:20.495 { 00:05:20.495 "method": "fsdev_set_opts", 00:05:20.495 "params": { 00:05:20.495 "fsdev_io_pool_size": 65535, 00:05:20.495 "fsdev_io_cache_size": 256 00:05:20.495 } 00:05:20.495 } 00:05:20.495 ] 00:05:20.495 }, 00:05:20.495 { 00:05:20.495 "subsystem": "keyring", 00:05:20.495 "config": [] 00:05:20.495 }, 00:05:20.495 { 00:05:20.495 "subsystem": "iobuf", 00:05:20.495 "config": [ 00:05:20.495 { 00:05:20.495 "method": "iobuf_set_options", 00:05:20.495 "params": { 00:05:20.495 "small_pool_count": 8192, 00:05:20.495 "large_pool_count": 1024, 00:05:20.495 "small_bufsize": 8192, 00:05:20.495 "large_bufsize": 135168, 00:05:20.495 "enable_numa": false 00:05:20.495 } 00:05:20.495 } 00:05:20.495 ] 00:05:20.495 }, 00:05:20.495 { 00:05:20.495 "subsystem": "sock", 00:05:20.495 "config": [ 00:05:20.495 { 00:05:20.495 "method": "sock_set_default_impl", 00:05:20.495 "params": { 00:05:20.495 "impl_name": "posix" 00:05:20.495 } 00:05:20.495 }, 00:05:20.495 { 00:05:20.495 "method": "sock_impl_set_options", 00:05:20.495 "params": { 00:05:20.495 "impl_name": "ssl", 00:05:20.495 "recv_buf_size": 4096, 00:05:20.495 "send_buf_size": 4096, 00:05:20.495 "enable_recv_pipe": true, 00:05:20.495 "enable_quickack": false, 00:05:20.495 
"enable_placement_id": 0, 00:05:20.495 "enable_zerocopy_send_server": true, 00:05:20.495 "enable_zerocopy_send_client": false, 00:05:20.495 "zerocopy_threshold": 0, 00:05:20.496 "tls_version": 0, 00:05:20.496 "enable_ktls": false 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "sock_impl_set_options", 00:05:20.496 "params": { 00:05:20.496 "impl_name": "posix", 00:05:20.496 "recv_buf_size": 2097152, 00:05:20.496 "send_buf_size": 2097152, 00:05:20.496 "enable_recv_pipe": true, 00:05:20.496 "enable_quickack": false, 00:05:20.496 "enable_placement_id": 0, 00:05:20.496 "enable_zerocopy_send_server": true, 00:05:20.496 "enable_zerocopy_send_client": false, 00:05:20.496 "zerocopy_threshold": 0, 00:05:20.496 "tls_version": 0, 00:05:20.496 "enable_ktls": false 00:05:20.496 } 00:05:20.496 } 00:05:20.496 ] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "vmd", 00:05:20.496 "config": [] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "accel", 00:05:20.496 "config": [ 00:05:20.496 { 00:05:20.496 "method": "accel_set_options", 00:05:20.496 "params": { 00:05:20.496 "small_cache_size": 128, 00:05:20.496 "large_cache_size": 16, 00:05:20.496 "task_count": 2048, 00:05:20.496 "sequence_count": 2048, 00:05:20.496 "buf_count": 2048 00:05:20.496 } 00:05:20.496 } 00:05:20.496 ] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "bdev", 00:05:20.496 "config": [ 00:05:20.496 { 00:05:20.496 "method": "bdev_set_options", 00:05:20.496 "params": { 00:05:20.496 "bdev_io_pool_size": 65535, 00:05:20.496 "bdev_io_cache_size": 256, 00:05:20.496 "bdev_auto_examine": true, 00:05:20.496 "iobuf_small_cache_size": 128, 00:05:20.496 "iobuf_large_cache_size": 16 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "bdev_raid_set_options", 00:05:20.496 "params": { 00:05:20.496 "process_window_size_kb": 1024, 00:05:20.496 "process_max_bandwidth_mb_sec": 0 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "bdev_iscsi_set_options", 
00:05:20.496 "params": { 00:05:20.496 "timeout_sec": 30 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "bdev_nvme_set_options", 00:05:20.496 "params": { 00:05:20.496 "action_on_timeout": "none", 00:05:20.496 "timeout_us": 0, 00:05:20.496 "timeout_admin_us": 0, 00:05:20.496 "keep_alive_timeout_ms": 10000, 00:05:20.496 "arbitration_burst": 0, 00:05:20.496 "low_priority_weight": 0, 00:05:20.496 "medium_priority_weight": 0, 00:05:20.496 "high_priority_weight": 0, 00:05:20.496 "nvme_adminq_poll_period_us": 10000, 00:05:20.496 "nvme_ioq_poll_period_us": 0, 00:05:20.496 "io_queue_requests": 0, 00:05:20.496 "delay_cmd_submit": true, 00:05:20.496 "transport_retry_count": 4, 00:05:20.496 "bdev_retry_count": 3, 00:05:20.496 "transport_ack_timeout": 0, 00:05:20.496 "ctrlr_loss_timeout_sec": 0, 00:05:20.496 "reconnect_delay_sec": 0, 00:05:20.496 "fast_io_fail_timeout_sec": 0, 00:05:20.496 "disable_auto_failback": false, 00:05:20.496 "generate_uuids": false, 00:05:20.496 "transport_tos": 0, 00:05:20.496 "nvme_error_stat": false, 00:05:20.496 "rdma_srq_size": 0, 00:05:20.496 "io_path_stat": false, 00:05:20.496 "allow_accel_sequence": false, 00:05:20.496 "rdma_max_cq_size": 0, 00:05:20.496 "rdma_cm_event_timeout_ms": 0, 00:05:20.496 "dhchap_digests": [ 00:05:20.496 "sha256", 00:05:20.496 "sha384", 00:05:20.496 "sha512" 00:05:20.496 ], 00:05:20.496 "dhchap_dhgroups": [ 00:05:20.496 "null", 00:05:20.496 "ffdhe2048", 00:05:20.496 "ffdhe3072", 00:05:20.496 "ffdhe4096", 00:05:20.496 "ffdhe6144", 00:05:20.496 "ffdhe8192" 00:05:20.496 ] 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "bdev_nvme_set_hotplug", 00:05:20.496 "params": { 00:05:20.496 "period_us": 100000, 00:05:20.496 "enable": false 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "bdev_wait_for_examine" 00:05:20.496 } 00:05:20.496 ] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "scsi", 00:05:20.496 "config": null 00:05:20.496 }, 00:05:20.496 { 
00:05:20.496 "subsystem": "scheduler", 00:05:20.496 "config": [ 00:05:20.496 { 00:05:20.496 "method": "framework_set_scheduler", 00:05:20.496 "params": { 00:05:20.496 "name": "static" 00:05:20.496 } 00:05:20.496 } 00:05:20.496 ] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "vhost_scsi", 00:05:20.496 "config": [] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "vhost_blk", 00:05:20.496 "config": [] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "ublk", 00:05:20.496 "config": [] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "nbd", 00:05:20.496 "config": [] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "nvmf", 00:05:20.496 "config": [ 00:05:20.496 { 00:05:20.496 "method": "nvmf_set_config", 00:05:20.496 "params": { 00:05:20.496 "discovery_filter": "match_any", 00:05:20.496 "admin_cmd_passthru": { 00:05:20.496 "identify_ctrlr": false 00:05:20.496 }, 00:05:20.496 "dhchap_digests": [ 00:05:20.496 "sha256", 00:05:20.496 "sha384", 00:05:20.496 "sha512" 00:05:20.496 ], 00:05:20.496 "dhchap_dhgroups": [ 00:05:20.496 "null", 00:05:20.496 "ffdhe2048", 00:05:20.496 "ffdhe3072", 00:05:20.496 "ffdhe4096", 00:05:20.496 "ffdhe6144", 00:05:20.496 "ffdhe8192" 00:05:20.496 ] 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "nvmf_set_max_subsystems", 00:05:20.496 "params": { 00:05:20.496 "max_subsystems": 1024 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "nvmf_set_crdt", 00:05:20.496 "params": { 00:05:20.496 "crdt1": 0, 00:05:20.496 "crdt2": 0, 00:05:20.496 "crdt3": 0 00:05:20.496 } 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "method": "nvmf_create_transport", 00:05:20.496 "params": { 00:05:20.496 "trtype": "TCP", 00:05:20.496 "max_queue_depth": 128, 00:05:20.496 "max_io_qpairs_per_ctrlr": 127, 00:05:20.496 "in_capsule_data_size": 4096, 00:05:20.496 "max_io_size": 131072, 00:05:20.496 "io_unit_size": 131072, 00:05:20.496 "max_aq_depth": 128, 00:05:20.496 "num_shared_buffers": 511, 
00:05:20.496 "buf_cache_size": 4294967295, 00:05:20.496 "dif_insert_or_strip": false, 00:05:20.496 "zcopy": false, 00:05:20.496 "c2h_success": true, 00:05:20.496 "sock_priority": 0, 00:05:20.496 "abort_timeout_sec": 1, 00:05:20.496 "ack_timeout": 0, 00:05:20.496 "data_wr_pool_size": 0 00:05:20.496 } 00:05:20.496 } 00:05:20.496 ] 00:05:20.496 }, 00:05:20.496 { 00:05:20.496 "subsystem": "iscsi", 00:05:20.497 "config": [ 00:05:20.497 { 00:05:20.497 "method": "iscsi_set_options", 00:05:20.497 "params": { 00:05:20.497 "node_base": "iqn.2016-06.io.spdk", 00:05:20.497 "max_sessions": 128, 00:05:20.497 "max_connections_per_session": 2, 00:05:20.497 "max_queue_depth": 64, 00:05:20.497 "default_time2wait": 2, 00:05:20.497 "default_time2retain": 20, 00:05:20.497 "first_burst_length": 8192, 00:05:20.497 "immediate_data": true, 00:05:20.497 "allow_duplicated_isid": false, 00:05:20.497 "error_recovery_level": 0, 00:05:20.497 "nop_timeout": 60, 00:05:20.497 "nop_in_interval": 30, 00:05:20.497 "disable_chap": false, 00:05:20.497 "require_chap": false, 00:05:20.497 "mutual_chap": false, 00:05:20.497 "chap_group": 0, 00:05:20.497 "max_large_datain_per_connection": 64, 00:05:20.497 "max_r2t_per_connection": 4, 00:05:20.497 "pdu_pool_size": 36864, 00:05:20.497 "immediate_data_pool_size": 16384, 00:05:20.497 "data_out_pool_size": 2048 00:05:20.497 } 00:05:20.497 } 00:05:20.497 ] 00:05:20.497 } 00:05:20.497 ] 00:05:20.497 } 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57202 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57202 ']' 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57202 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57202 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.497 killing process with pid 57202 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57202' 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57202 00:05:20.497 14:22:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57202 00:05:23.026 14:22:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57258 00:05:23.026 14:22:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.026 14:22:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57258 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57258 ']' 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57258 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57258 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:28.276 killing process with pid 57258 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57258' 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57258 00:05:28.276 14:22:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57258 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.170 00:05:30.170 real 0m11.238s 00:05:30.170 user 0m10.616s 00:05:30.170 sys 0m1.033s 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.170 ************************************ 00:05:30.170 END TEST skip_rpc_with_json 00:05:30.170 ************************************ 00:05:30.170 14:22:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.170 14:22:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.170 14:22:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.170 14:22:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.170 ************************************ 00:05:30.170 START TEST skip_rpc_with_delay 00:05:30.170 ************************************ 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:30.170 14:22:31 
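The `killprocess` calls traced above follow a recognizable shape: guard against an empty pid, probe the process with `kill -0`, and on Linux read the process name with `ps --no-headers -o comm=` so a `sudo` wrapper is never killed directly. A minimal sketch of that pattern, reconstructed from the xtrace lines rather than copied from the real `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Hedged re-creation of the killprocess helper pattern visible in the trace.
# Function body is an assumption inferred from the xtrace, not the actual script.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                # "'[' -z <pid> ']'" guard from the trace
  kill -0 "$pid" 2>/dev/null || return 1   # liveness probe: "kill -0 <pid>"
  if [ "$(uname)" = Linux ]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1 # refuse to kill a sudo wrapper directly
  fi
  echo "killing process with pid $pid"
  kill "$pid"
}
```

The `sudo` check matters because killing the wrapper would leave the privileged child running; the trace shows the same `'[' reactor_0 = sudo ']'` comparison before each kill.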
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.170 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.428 [2024-11-20 14:22:31.241010] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:30.428 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:30.428 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.428 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:30.428 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.428 00:05:30.428 real 0m0.250s 00:05:30.428 user 0m0.131s 00:05:30.428 sys 0m0.116s 00:05:30.428 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.428 14:22:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.428 ************************************ 00:05:30.428 END TEST skip_rpc_with_delay 00:05:30.428 ************************************ 00:05:30.428 14:22:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.428 14:22:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.428 14:22:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.428 14:22:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.428 14:22:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.428 14:22:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.428 ************************************ 00:05:30.428 START TEST exit_on_failed_rpc_init 00:05:30.428 ************************************ 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57386 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57386 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.428 14:22:31 
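The `NOT ... --wait-for-rpc` sequence above expects the target to fail: it sets `local es=0`, runs the command, and then asserts `(( !es == 0 ))`. A minimal sketch of such an expected-failure wrapper, assuming the same capture-and-invert shape (the real helper also validates its argument first, which is omitted here):

```shell
#!/usr/bin/env bash
# Hedged sketch of the NOT expected-failure wrapper from the xtrace:
# succeed only when the wrapped command exits non-zero.
NOT() {
  local es=0
  "$@" || es=$?      # capture the command's exit status without aborting
  (( es != 0 ))      # mirrors the "(( !es == 0 ))" assertion in the trace
}
```

With this, `NOT false` succeeds and `NOT true` fails, which is exactly how the test asserts that `spdk_tgt --no-rpc-server --wait-for-rpc` refuses to start.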
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57386 ']' 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.428 14:22:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.428 [2024-11-20 14:22:31.472181] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:05:30.428 [2024-11-20 14:22:31.472339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57386 ] 00:05:30.684 [2024-11-20 14:22:31.648707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.941 [2024-11-20 14:22:31.778829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.872 14:22:32 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.872 14:22:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.872 [2024-11-20 14:22:32.874882] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
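Before running the target, the trace walks a `valid_exec_arg` check: classify the first word with `type -t`, and for an on-disk file resolve it with `type -P` and verify it is executable. A hedged reconstruction of that pattern (function body inferred from the xtrace, not the actual `autotest_common.sh`):

```shell
#!/usr/bin/env bash
# Hedged sketch: accept builtins/functions/aliases/keywords outright;
# for a file, resolve the path with 'type -P' and require execute permission.
valid_exec_arg() {
  local arg=$1
  case "$(type -t "$arg")" in
    builtin|function|alias|keyword) return 0 ;;
    file) arg=$(type -P "$arg") && [ -x "$arg" ] ;;
    *) return 1 ;;                 # unknown name: 'type -t' printed nothing
  esac
}
```

This is why the trace shows both a `type -t` and a `type -P` probe against the `spdk_tgt` path before the `[[ -x ... ]]` test and the actual invocation.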
00:05:31.872 [2024-11-20 14:22:32.875076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57409 ] 00:05:32.130 [2024-11-20 14:22:33.064481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.388 [2024-11-20 14:22:33.218925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.388 [2024-11-20 14:22:33.219083] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:32.388 [2024-11-20 14:22:33.219111] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.388 [2024-11-20 14:22:33.219135] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57386 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57386 ']' 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57386 00:05:32.648 14:22:33 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57386 00:05:32.648 killing process with pid 57386 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57386' 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57386 00:05:32.648 14:22:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57386 00:05:35.177 ************************************ 00:05:35.177 END TEST exit_on_failed_rpc_init 00:05:35.177 ************************************ 00:05:35.177 00:05:35.177 real 0m4.465s 00:05:35.177 user 0m4.891s 00:05:35.177 sys 0m0.674s 00:05:35.177 14:22:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.177 14:22:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.177 14:22:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:35.177 ************************************ 00:05:35.177 END TEST skip_rpc 00:05:35.177 ************************************ 00:05:35.177 00:05:35.177 real 0m23.687s 00:05:35.177 user 0m22.526s 00:05:35.177 sys 0m2.557s 00:05:35.177 14:22:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.177 14:22:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.177 14:22:35 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.177 14:22:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.177 14:22:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.177 14:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:35.177 ************************************ 00:05:35.177 START TEST rpc_client 00:05:35.177 ************************************ 00:05:35.177 14:22:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:35.177 * Looking for test storage... 00:05:35.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:35.177 14:22:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.177 14:22:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.177 14:22:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.177 14:22:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.177 --rc genhtml_branch_coverage=1 00:05:35.177 --rc genhtml_function_coverage=1 00:05:35.177 --rc genhtml_legend=1 00:05:35.177 --rc geninfo_all_blocks=1 00:05:35.177 --rc geninfo_unexecuted_blocks=1 00:05:35.177 00:05:35.177 ' 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.177 --rc genhtml_branch_coverage=1 00:05:35.177 --rc genhtml_function_coverage=1 00:05:35.177 --rc 
genhtml_legend=1 00:05:35.177 --rc geninfo_all_blocks=1 00:05:35.177 --rc geninfo_unexecuted_blocks=1 00:05:35.177 00:05:35.177 ' 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.177 --rc genhtml_branch_coverage=1 00:05:35.177 --rc genhtml_function_coverage=1 00:05:35.177 --rc genhtml_legend=1 00:05:35.177 --rc geninfo_all_blocks=1 00:05:35.177 --rc geninfo_unexecuted_blocks=1 00:05:35.177 00:05:35.177 ' 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.177 --rc genhtml_branch_coverage=1 00:05:35.177 --rc genhtml_function_coverage=1 00:05:35.177 --rc genhtml_legend=1 00:05:35.177 --rc geninfo_all_blocks=1 00:05:35.177 --rc geninfo_unexecuted_blocks=1 00:05:35.177 00:05:35.177 ' 00:05:35.177 14:22:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:35.177 OK 00:05:35.177 14:22:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:35.177 00:05:35.177 real 0m0.220s 00:05:35.177 user 0m0.128s 00:05:35.177 sys 0m0.101s 00:05:35.177 ************************************ 00:05:35.177 END TEST rpc_client 00:05:35.177 ************************************ 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.177 14:22:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:35.177 14:22:36 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.177 14:22:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.177 14:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.177 14:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.177 ************************************ 00:05:35.177 START TEST json_config 
00:05:35.177 ************************************ 00:05:35.177 14:22:36 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:35.177 14:22:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.177 14:22:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.177 14:22:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.437 14:22:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.437 14:22:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.437 14:22:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.437 14:22:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.437 14:22:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.437 14:22:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.437 14:22:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.437 14:22:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.437 14:22:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.437 14:22:36 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.437 14:22:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.437 14:22:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.437 14:22:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.437 14:22:36 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.437 14:22:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.437 14:22:36 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.437 14:22:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.437 14:22:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.437 14:22:36 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.437 14:22:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.437 14:22:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.437 14:22:36 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.437 14:22:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.437 14:22:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.437 --rc genhtml_branch_coverage=1 00:05:35.437 --rc genhtml_function_coverage=1 00:05:35.437 --rc genhtml_legend=1 00:05:35.437 --rc geninfo_all_blocks=1 00:05:35.437 --rc geninfo_unexecuted_blocks=1 00:05:35.437 00:05:35.437 ' 00:05:35.437 14:22:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.437 --rc genhtml_branch_coverage=1 00:05:35.437 --rc genhtml_function_coverage=1 00:05:35.437 --rc genhtml_legend=1 00:05:35.437 --rc geninfo_all_blocks=1 00:05:35.437 --rc geninfo_unexecuted_blocks=1 00:05:35.437 00:05:35.437 ' 00:05:35.437 14:22:36 json_config -- 
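The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace above splits both version strings on `IFS=.-:` into arrays and compares them field by field. A minimal sketch of that less-than comparison, assuming numeric fields and treating a missing field as 0 (the real `scripts/common.sh` handles all four operators; only `<` is shown here):

```shell
#!/usr/bin/env bash
# Hedged re-creation of the lt/cmp_versions pattern from the xtrace:
# succeed iff version $1 is strictly less than version $2.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"   # same IFS split seen in the trace
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
  done
  return 1   # equal versions are not strictly less-than
}
```

So `lt 1.15 2` succeeds (1 < 2 in the first field), which is how the script decides which lcov option set to export.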
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.437 --rc genhtml_branch_coverage=1 00:05:35.437 --rc genhtml_function_coverage=1 00:05:35.437 --rc genhtml_legend=1 00:05:35.437 --rc geninfo_all_blocks=1 00:05:35.437 --rc geninfo_unexecuted_blocks=1 00:05:35.437 00:05:35.437 ' 00:05:35.437 14:22:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.437 --rc genhtml_branch_coverage=1 00:05:35.437 --rc genhtml_function_coverage=1 00:05:35.437 --rc genhtml_legend=1 00:05:35.437 --rc geninfo_all_blocks=1 00:05:35.437 --rc geninfo_unexecuted_blocks=1 00:05:35.437 00:05:35.437 ' 00:05:35.437 14:22:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00c99eb5-4b77-4cf8-b25b-b17f9cba7a78 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00c99eb5-4b77-4cf8-b25b-b17f9cba7a78 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.437 14:22:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.437 14:22:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.437 14:22:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.437 14:22:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.437 14:22:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.437 14:22:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.437 14:22:36 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.437 14:22:36 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.437 14:22:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.437 14:22:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.438 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.438 14:22:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.438 WARNING: No tests are enabled so not running JSON configuration tests 00:05:35.438 14:22:36 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.438 14:22:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.438 14:22:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.438 14:22:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.438 14:22:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.438 14:22:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:35.438 14:22:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:35.438 00:05:35.438 real 0m0.180s 00:05:35.438 user 0m0.126s 00:05:35.438 sys 0m0.055s 00:05:35.438 ************************************ 00:05:35.438 END TEST json_config 00:05:35.438 ************************************ 00:05:35.438 14:22:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.438 14:22:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.438 14:22:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.438 14:22:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.438 14:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.438 14:22:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.438 ************************************ 00:05:35.438 START TEST json_config_extra_key 00:05:35.438 ************************************ 00:05:35.438 14:22:36 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.438 14:22:36 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.438 14:22:36 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.438 14:22:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.697 14:22:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.697 14:22:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:35.697 14:22:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.697 14:22:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.697 --rc genhtml_branch_coverage=1 00:05:35.697 --rc genhtml_function_coverage=1 00:05:35.697 --rc genhtml_legend=1 00:05:35.697 --rc geninfo_all_blocks=1 00:05:35.697 --rc geninfo_unexecuted_blocks=1 00:05:35.697 00:05:35.697 ' 00:05:35.697 14:22:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.697 --rc genhtml_branch_coverage=1 00:05:35.697 --rc genhtml_function_coverage=1 00:05:35.697 --rc 
genhtml_legend=1 00:05:35.697 --rc geninfo_all_blocks=1 00:05:35.697 --rc geninfo_unexecuted_blocks=1 00:05:35.697 00:05:35.697 ' 00:05:35.697 14:22:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.697 --rc genhtml_branch_coverage=1 00:05:35.697 --rc genhtml_function_coverage=1 00:05:35.697 --rc genhtml_legend=1 00:05:35.697 --rc geninfo_all_blocks=1 00:05:35.697 --rc geninfo_unexecuted_blocks=1 00:05:35.697 00:05:35.697 ' 00:05:35.697 14:22:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.697 --rc genhtml_branch_coverage=1 00:05:35.697 --rc genhtml_function_coverage=1 00:05:35.697 --rc genhtml_legend=1 00:05:35.697 --rc geninfo_all_blocks=1 00:05:35.697 --rc geninfo_unexecuted_blocks=1 00:05:35.697 00:05:35.697 ' 00:05:35.697 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00c99eb5-4b77-4cf8-b25b-b17f9cba7a78 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00c99eb5-4b77-4cf8-b25b-b17f9cba7a78 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.697 14:22:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.698 14:22:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.698 14:22:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.698 14:22:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.698 14:22:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.698 14:22:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.698 14:22:36 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.698 14:22:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.698 14:22:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.698 14:22:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.698 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.698 14:22:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.698 INFO: launching applications... 00:05:35.698 Waiting for target to run... 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:35.698 14:22:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57616 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.698 14:22:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57616 /var/tmp/spdk_tgt.sock 00:05:35.698 14:22:36 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57616 ']' 00:05:35.698 14:22:36 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.698 14:22:36 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.698 14:22:36 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:35.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.698 14:22:36 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.698 14:22:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.698 [2024-11-20 14:22:36.695990] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:05:35.698 [2024-11-20 14:22:36.696362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57616 ] 00:05:36.265 [2024-11-20 14:22:37.218088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.522 [2024-11-20 14:22:37.338737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.087 14:22:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.087 14:22:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:37.087 00:05:37.087 14:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:37.087 INFO: shutting down applications... 
00:05:37.087 14:22:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57616 ]] 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57616 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.087 14:22:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57616 00:05:37.088 14:22:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.655 14:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.655 14:22:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.655 14:22:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57616 00:05:37.655 14:22:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.221 14:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.221 14:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.221 14:22:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57616 00:05:38.221 14:22:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.786 14:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.786 14:22:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.786 14:22:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57616 00:05:38.786 14:22:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.044 14:22:40 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:39.044 14:22:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.044 14:22:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57616 00:05:39.044 14:22:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.610 14:22:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.610 14:22:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.610 14:22:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57616 00:05:39.610 14:22:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.176 SPDK target shutdown done 00:05:40.176 14:22:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.176 14:22:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.176 14:22:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57616 00:05:40.176 14:22:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.176 14:22:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:40.176 14:22:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.176 14:22:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:40.176 Success 00:05:40.176 14:22:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:40.176 ************************************ 00:05:40.176 END TEST json_config_extra_key 00:05:40.176 ************************************ 00:05:40.176 00:05:40.176 real 0m4.684s 00:05:40.176 user 0m4.123s 00:05:40.176 sys 0m0.651s 00:05:40.176 14:22:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.176 14:22:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.176 14:22:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.176 14:22:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.176 14:22:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.176 14:22:41 -- common/autotest_common.sh@10 -- # set +x 00:05:40.176 ************************************ 00:05:40.176 START TEST alias_rpc 00:05:40.176 ************************************ 00:05:40.176 14:22:41 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:40.176 * Looking for test storage... 00:05:40.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:40.176 14:22:41 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.176 14:22:41 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.176 14:22:41 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:40.435 14:22:41 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:40.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.435 14:22:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.435 --rc genhtml_branch_coverage=1 00:05:40.435 --rc genhtml_function_coverage=1 00:05:40.435 --rc genhtml_legend=1 00:05:40.435 --rc geninfo_all_blocks=1 00:05:40.435 --rc geninfo_unexecuted_blocks=1 00:05:40.435 00:05:40.435 ' 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.435 
--rc genhtml_branch_coverage=1 00:05:40.435 --rc genhtml_function_coverage=1 00:05:40.435 --rc genhtml_legend=1 00:05:40.435 --rc geninfo_all_blocks=1 00:05:40.435 --rc geninfo_unexecuted_blocks=1 00:05:40.435 00:05:40.435 ' 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.435 --rc genhtml_branch_coverage=1 00:05:40.435 --rc genhtml_function_coverage=1 00:05:40.435 --rc genhtml_legend=1 00:05:40.435 --rc geninfo_all_blocks=1 00:05:40.435 --rc geninfo_unexecuted_blocks=1 00:05:40.435 00:05:40.435 ' 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.435 --rc genhtml_branch_coverage=1 00:05:40.435 --rc genhtml_function_coverage=1 00:05:40.435 --rc genhtml_legend=1 00:05:40.435 --rc geninfo_all_blocks=1 00:05:40.435 --rc geninfo_unexecuted_blocks=1 00:05:40.435 00:05:40.435 ' 00:05:40.435 14:22:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:40.435 14:22:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57733 00:05:40.435 14:22:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:40.435 14:22:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57733 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57733 ']' 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.435 14:22:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.435 [2024-11-20 14:22:41.401899] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:05:40.435 [2024-11-20 14:22:41.402416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57733 ] 00:05:40.693 [2024-11-20 14:22:41.601324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.693 [2024-11-20 14:22:41.729958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.626 14:22:42 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.626 14:22:42 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:41.626 14:22:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:42.192 14:22:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57733 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57733 ']' 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57733 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57733 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57733' 00:05:42.192 killing process with pid 57733 00:05:42.192 14:22:43 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57733 00:05:42.192 14:22:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 57733 00:05:44.782 00:05:44.782 real 0m4.238s 00:05:44.782 user 0m4.478s 00:05:44.782 sys 0m0.677s 00:05:44.782 ************************************ 00:05:44.782 END TEST alias_rpc 00:05:44.782 ************************************ 00:05:44.782 14:22:45 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.782 14:22:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.782 14:22:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:44.782 14:22:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.782 14:22:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.782 14:22:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.782 14:22:45 -- common/autotest_common.sh@10 -- # set +x 00:05:44.782 ************************************ 00:05:44.782 START TEST spdkcli_tcp 00:05:44.782 ************************************ 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.782 * Looking for test storage... 
00:05:44.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.782 14:22:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.782 --rc genhtml_branch_coverage=1 00:05:44.782 --rc genhtml_function_coverage=1 00:05:44.782 --rc genhtml_legend=1 00:05:44.782 --rc geninfo_all_blocks=1 00:05:44.782 --rc geninfo_unexecuted_blocks=1 00:05:44.782 00:05:44.782 ' 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.782 --rc genhtml_branch_coverage=1 00:05:44.782 --rc genhtml_function_coverage=1 00:05:44.782 --rc genhtml_legend=1 00:05:44.782 --rc geninfo_all_blocks=1 00:05:44.782 --rc geninfo_unexecuted_blocks=1 00:05:44.782 00:05:44.782 ' 00:05:44.782 14:22:45 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.782 --rc genhtml_branch_coverage=1 00:05:44.782 --rc genhtml_function_coverage=1 00:05:44.782 --rc genhtml_legend=1 00:05:44.782 --rc geninfo_all_blocks=1 00:05:44.782 --rc geninfo_unexecuted_blocks=1 00:05:44.782 00:05:44.782 ' 00:05:44.782 14:22:45 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.782 --rc genhtml_branch_coverage=1 00:05:44.782 --rc genhtml_function_coverage=1 00:05:44.782 --rc genhtml_legend=1 00:05:44.782 --rc geninfo_all_blocks=1 00:05:44.782 --rc geninfo_unexecuted_blocks=1 00:05:44.782 00:05:44.782 ' 00:05:44.782 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:44.782 14:22:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:44.782 14:22:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:44.782 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.782 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.782 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.782 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
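The `lt 1.15 2` / `cmp_versions 1.15 '<' 2` trace from scripts/common.sh a few lines above splits both version strings on `.-:` and compares the fields numerically, padding the shorter version with zeros. A self-contained sketch of that comparison (re-implemented from the traced logic, not the SPDK original):

```shell
# Minimal re-implementation of the cmp_versions "less than" check traced
# above: split both versions on ".-:" and compare field by field, numerically.
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  local v
  for (( v = 0; v < n; v++ )); do
    # Missing fields count as 0, so "2" compares like "2.0".
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace reports `ver1_l=2` and `ver2_l=1` for `1.15` vs `2`: the field arrays have different lengths and the loop runs over the longer one.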
00:05:44.783 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57840 00:05:44.783 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57840 00:05:44.783 14:22:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57840 ']' 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.783 14:22:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.783 [2024-11-20 14:22:45.705859] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:05:44.783 [2024-11-20 14:22:45.706052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57840 ] 00:05:45.066 [2024-11-20 14:22:45.891058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.066 [2024-11-20 14:22:46.035069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.066 [2024-11-20 14:22:46.035077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.000 14:22:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.000 14:22:46 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:46.000 14:22:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57857 00:05:46.000 14:22:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:46.000 14:22:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:46.258 [ 00:05:46.258 "bdev_malloc_delete", 00:05:46.258 "bdev_malloc_create", 00:05:46.258 "bdev_null_resize", 00:05:46.258 "bdev_null_delete", 00:05:46.258 "bdev_null_create", 00:05:46.259 "bdev_nvme_cuse_unregister", 00:05:46.259 "bdev_nvme_cuse_register", 00:05:46.259 "bdev_opal_new_user", 00:05:46.259 "bdev_opal_set_lock_state", 00:05:46.259 "bdev_opal_delete", 00:05:46.259 "bdev_opal_get_info", 00:05:46.259 "bdev_opal_create", 00:05:46.259 "bdev_nvme_opal_revert", 00:05:46.259 "bdev_nvme_opal_init", 00:05:46.259 "bdev_nvme_send_cmd", 00:05:46.259 "bdev_nvme_set_keys", 00:05:46.259 "bdev_nvme_get_path_iostat", 00:05:46.259 "bdev_nvme_get_mdns_discovery_info", 00:05:46.259 "bdev_nvme_stop_mdns_discovery", 00:05:46.259 "bdev_nvme_start_mdns_discovery", 00:05:46.259 "bdev_nvme_set_multipath_policy", 00:05:46.259 
"bdev_nvme_set_preferred_path", 00:05:46.259 "bdev_nvme_get_io_paths", 00:05:46.259 "bdev_nvme_remove_error_injection", 00:05:46.259 "bdev_nvme_add_error_injection", 00:05:46.259 "bdev_nvme_get_discovery_info", 00:05:46.259 "bdev_nvme_stop_discovery", 00:05:46.259 "bdev_nvme_start_discovery", 00:05:46.259 "bdev_nvme_get_controller_health_info", 00:05:46.259 "bdev_nvme_disable_controller", 00:05:46.259 "bdev_nvme_enable_controller", 00:05:46.259 "bdev_nvme_reset_controller", 00:05:46.259 "bdev_nvme_get_transport_statistics", 00:05:46.259 "bdev_nvme_apply_firmware", 00:05:46.259 "bdev_nvme_detach_controller", 00:05:46.259 "bdev_nvme_get_controllers", 00:05:46.259 "bdev_nvme_attach_controller", 00:05:46.259 "bdev_nvme_set_hotplug", 00:05:46.259 "bdev_nvme_set_options", 00:05:46.259 "bdev_passthru_delete", 00:05:46.259 "bdev_passthru_create", 00:05:46.259 "bdev_lvol_set_parent_bdev", 00:05:46.259 "bdev_lvol_set_parent", 00:05:46.259 "bdev_lvol_check_shallow_copy", 00:05:46.259 "bdev_lvol_start_shallow_copy", 00:05:46.259 "bdev_lvol_grow_lvstore", 00:05:46.259 "bdev_lvol_get_lvols", 00:05:46.259 "bdev_lvol_get_lvstores", 00:05:46.259 "bdev_lvol_delete", 00:05:46.259 "bdev_lvol_set_read_only", 00:05:46.259 "bdev_lvol_resize", 00:05:46.259 "bdev_lvol_decouple_parent", 00:05:46.259 "bdev_lvol_inflate", 00:05:46.259 "bdev_lvol_rename", 00:05:46.259 "bdev_lvol_clone_bdev", 00:05:46.259 "bdev_lvol_clone", 00:05:46.259 "bdev_lvol_snapshot", 00:05:46.259 "bdev_lvol_create", 00:05:46.259 "bdev_lvol_delete_lvstore", 00:05:46.259 "bdev_lvol_rename_lvstore", 00:05:46.259 "bdev_lvol_create_lvstore", 00:05:46.259 "bdev_raid_set_options", 00:05:46.259 "bdev_raid_remove_base_bdev", 00:05:46.259 "bdev_raid_add_base_bdev", 00:05:46.259 "bdev_raid_delete", 00:05:46.259 "bdev_raid_create", 00:05:46.259 "bdev_raid_get_bdevs", 00:05:46.259 "bdev_error_inject_error", 00:05:46.259 "bdev_error_delete", 00:05:46.259 "bdev_error_create", 00:05:46.259 "bdev_split_delete", 00:05:46.259 
"bdev_split_create", 00:05:46.259 "bdev_delay_delete", 00:05:46.259 "bdev_delay_create", 00:05:46.259 "bdev_delay_update_latency", 00:05:46.259 "bdev_zone_block_delete", 00:05:46.259 "bdev_zone_block_create", 00:05:46.259 "blobfs_create", 00:05:46.259 "blobfs_detect", 00:05:46.259 "blobfs_set_cache_size", 00:05:46.259 "bdev_aio_delete", 00:05:46.259 "bdev_aio_rescan", 00:05:46.259 "bdev_aio_create", 00:05:46.259 "bdev_ftl_set_property", 00:05:46.259 "bdev_ftl_get_properties", 00:05:46.259 "bdev_ftl_get_stats", 00:05:46.259 "bdev_ftl_unmap", 00:05:46.259 "bdev_ftl_unload", 00:05:46.259 "bdev_ftl_delete", 00:05:46.259 "bdev_ftl_load", 00:05:46.259 "bdev_ftl_create", 00:05:46.259 "bdev_virtio_attach_controller", 00:05:46.259 "bdev_virtio_scsi_get_devices", 00:05:46.259 "bdev_virtio_detach_controller", 00:05:46.259 "bdev_virtio_blk_set_hotplug", 00:05:46.259 "bdev_iscsi_delete", 00:05:46.259 "bdev_iscsi_create", 00:05:46.259 "bdev_iscsi_set_options", 00:05:46.259 "accel_error_inject_error", 00:05:46.259 "ioat_scan_accel_module", 00:05:46.259 "dsa_scan_accel_module", 00:05:46.259 "iaa_scan_accel_module", 00:05:46.259 "keyring_file_remove_key", 00:05:46.259 "keyring_file_add_key", 00:05:46.259 "keyring_linux_set_options", 00:05:46.259 "fsdev_aio_delete", 00:05:46.259 "fsdev_aio_create", 00:05:46.259 "iscsi_get_histogram", 00:05:46.259 "iscsi_enable_histogram", 00:05:46.259 "iscsi_set_options", 00:05:46.259 "iscsi_get_auth_groups", 00:05:46.259 "iscsi_auth_group_remove_secret", 00:05:46.259 "iscsi_auth_group_add_secret", 00:05:46.259 "iscsi_delete_auth_group", 00:05:46.259 "iscsi_create_auth_group", 00:05:46.259 "iscsi_set_discovery_auth", 00:05:46.259 "iscsi_get_options", 00:05:46.259 "iscsi_target_node_request_logout", 00:05:46.259 "iscsi_target_node_set_redirect", 00:05:46.259 "iscsi_target_node_set_auth", 00:05:46.259 "iscsi_target_node_add_lun", 00:05:46.259 "iscsi_get_stats", 00:05:46.259 "iscsi_get_connections", 00:05:46.259 "iscsi_portal_group_set_auth", 
00:05:46.259 "iscsi_start_portal_group", 00:05:46.259 "iscsi_delete_portal_group", 00:05:46.259 "iscsi_create_portal_group", 00:05:46.259 "iscsi_get_portal_groups", 00:05:46.259 "iscsi_delete_target_node", 00:05:46.259 "iscsi_target_node_remove_pg_ig_maps", 00:05:46.259 "iscsi_target_node_add_pg_ig_maps", 00:05:46.259 "iscsi_create_target_node", 00:05:46.259 "iscsi_get_target_nodes", 00:05:46.259 "iscsi_delete_initiator_group", 00:05:46.259 "iscsi_initiator_group_remove_initiators", 00:05:46.259 "iscsi_initiator_group_add_initiators", 00:05:46.259 "iscsi_create_initiator_group", 00:05:46.259 "iscsi_get_initiator_groups", 00:05:46.259 "nvmf_set_crdt", 00:05:46.259 "nvmf_set_config", 00:05:46.259 "nvmf_set_max_subsystems", 00:05:46.259 "nvmf_stop_mdns_prr", 00:05:46.259 "nvmf_publish_mdns_prr", 00:05:46.259 "nvmf_subsystem_get_listeners", 00:05:46.259 "nvmf_subsystem_get_qpairs", 00:05:46.259 "nvmf_subsystem_get_controllers", 00:05:46.259 "nvmf_get_stats", 00:05:46.259 "nvmf_get_transports", 00:05:46.259 "nvmf_create_transport", 00:05:46.259 "nvmf_get_targets", 00:05:46.259 "nvmf_delete_target", 00:05:46.259 "nvmf_create_target", 00:05:46.259 "nvmf_subsystem_allow_any_host", 00:05:46.259 "nvmf_subsystem_set_keys", 00:05:46.259 "nvmf_subsystem_remove_host", 00:05:46.259 "nvmf_subsystem_add_host", 00:05:46.259 "nvmf_ns_remove_host", 00:05:46.259 "nvmf_ns_add_host", 00:05:46.259 "nvmf_subsystem_remove_ns", 00:05:46.259 "nvmf_subsystem_set_ns_ana_group", 00:05:46.259 "nvmf_subsystem_add_ns", 00:05:46.259 "nvmf_subsystem_listener_set_ana_state", 00:05:46.259 "nvmf_discovery_get_referrals", 00:05:46.259 "nvmf_discovery_remove_referral", 00:05:46.259 "nvmf_discovery_add_referral", 00:05:46.259 "nvmf_subsystem_remove_listener", 00:05:46.259 "nvmf_subsystem_add_listener", 00:05:46.259 "nvmf_delete_subsystem", 00:05:46.259 "nvmf_create_subsystem", 00:05:46.259 "nvmf_get_subsystems", 00:05:46.259 "env_dpdk_get_mem_stats", 00:05:46.259 "nbd_get_disks", 00:05:46.259 
"nbd_stop_disk", 00:05:46.259 "nbd_start_disk", 00:05:46.259 "ublk_recover_disk", 00:05:46.259 "ublk_get_disks", 00:05:46.259 "ublk_stop_disk", 00:05:46.259 "ublk_start_disk", 00:05:46.259 "ublk_destroy_target", 00:05:46.259 "ublk_create_target", 00:05:46.259 "virtio_blk_create_transport", 00:05:46.259 "virtio_blk_get_transports", 00:05:46.259 "vhost_controller_set_coalescing", 00:05:46.259 "vhost_get_controllers", 00:05:46.259 "vhost_delete_controller", 00:05:46.259 "vhost_create_blk_controller", 00:05:46.259 "vhost_scsi_controller_remove_target", 00:05:46.259 "vhost_scsi_controller_add_target", 00:05:46.259 "vhost_start_scsi_controller", 00:05:46.259 "vhost_create_scsi_controller", 00:05:46.259 "thread_set_cpumask", 00:05:46.259 "scheduler_set_options", 00:05:46.259 "framework_get_governor", 00:05:46.259 "framework_get_scheduler", 00:05:46.259 "framework_set_scheduler", 00:05:46.259 "framework_get_reactors", 00:05:46.259 "thread_get_io_channels", 00:05:46.259 "thread_get_pollers", 00:05:46.259 "thread_get_stats", 00:05:46.259 "framework_monitor_context_switch", 00:05:46.259 "spdk_kill_instance", 00:05:46.259 "log_enable_timestamps", 00:05:46.259 "log_get_flags", 00:05:46.259 "log_clear_flag", 00:05:46.259 "log_set_flag", 00:05:46.259 "log_get_level", 00:05:46.259 "log_set_level", 00:05:46.259 "log_get_print_level", 00:05:46.259 "log_set_print_level", 00:05:46.259 "framework_enable_cpumask_locks", 00:05:46.259 "framework_disable_cpumask_locks", 00:05:46.259 "framework_wait_init", 00:05:46.259 "framework_start_init", 00:05:46.259 "scsi_get_devices", 00:05:46.259 "bdev_get_histogram", 00:05:46.259 "bdev_enable_histogram", 00:05:46.259 "bdev_set_qos_limit", 00:05:46.259 "bdev_set_qd_sampling_period", 00:05:46.259 "bdev_get_bdevs", 00:05:46.259 "bdev_reset_iostat", 00:05:46.259 "bdev_get_iostat", 00:05:46.259 "bdev_examine", 00:05:46.259 "bdev_wait_for_examine", 00:05:46.259 "bdev_set_options", 00:05:46.259 "accel_get_stats", 00:05:46.259 "accel_set_options", 
00:05:46.259 "accel_set_driver", 00:05:46.259 "accel_crypto_key_destroy", 00:05:46.259 "accel_crypto_keys_get", 00:05:46.259 "accel_crypto_key_create", 00:05:46.259 "accel_assign_opc", 00:05:46.259 "accel_get_module_info", 00:05:46.259 "accel_get_opc_assignments", 00:05:46.259 "vmd_rescan", 00:05:46.259 "vmd_remove_device", 00:05:46.259 "vmd_enable", 00:05:46.259 "sock_get_default_impl", 00:05:46.259 "sock_set_default_impl", 00:05:46.259 "sock_impl_set_options", 00:05:46.259 "sock_impl_get_options", 00:05:46.260 "iobuf_get_stats", 00:05:46.260 "iobuf_set_options", 00:05:46.260 "keyring_get_keys", 00:05:46.260 "framework_get_pci_devices", 00:05:46.260 "framework_get_config", 00:05:46.260 "framework_get_subsystems", 00:05:46.260 "fsdev_set_opts", 00:05:46.260 "fsdev_get_opts", 00:05:46.260 "trace_get_info", 00:05:46.260 "trace_get_tpoint_group_mask", 00:05:46.260 "trace_disable_tpoint_group", 00:05:46.260 "trace_enable_tpoint_group", 00:05:46.260 "trace_clear_tpoint_mask", 00:05:46.260 "trace_set_tpoint_mask", 00:05:46.260 "notify_get_notifications", 00:05:46.260 "notify_get_types", 00:05:46.260 "spdk_get_version", 00:05:46.260 "rpc_get_methods" 00:05:46.260 ] 00:05:46.260 14:22:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:46.260 14:22:47 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:46.260 14:22:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.260 14:22:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:46.260 14:22:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57840 00:05:46.260 14:22:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57840 ']' 00:05:46.260 14:22:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57840 00:05:46.260 14:22:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:46.260 14:22:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.260 14:22:47 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57840 00:05:46.518 killing process with pid 57840 00:05:46.518 14:22:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.518 14:22:47 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.518 14:22:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57840' 00:05:46.518 14:22:47 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57840 00:05:46.518 14:22:47 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57840 00:05:49.087 ************************************ 00:05:49.087 END TEST spdkcli_tcp 00:05:49.087 ************************************ 00:05:49.087 00:05:49.087 real 0m4.278s 00:05:49.087 user 0m7.676s 00:05:49.087 sys 0m0.717s 00:05:49.087 14:22:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.087 14:22:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.087 14:22:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.087 14:22:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.087 14:22:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.087 14:22:49 -- common/autotest_common.sh@10 -- # set +x 00:05:49.087 ************************************ 00:05:49.087 START TEST dpdk_mem_utility 00:05:49.087 ************************************ 00:05:49.087 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.087 * Looking for test storage... 
00:05:49.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:49.087 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.087 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.087 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.087 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.087 14:22:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.088 14:22:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.088 --rc genhtml_branch_coverage=1 00:05:49.088 --rc genhtml_function_coverage=1 00:05:49.088 --rc genhtml_legend=1 00:05:49.088 --rc geninfo_all_blocks=1 00:05:49.088 --rc geninfo_unexecuted_blocks=1 00:05:49.088 00:05:49.088 ' 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.088 --rc genhtml_branch_coverage=1 00:05:49.088 --rc genhtml_function_coverage=1 00:05:49.088 --rc genhtml_legend=1 00:05:49.088 --rc geninfo_all_blocks=1 00:05:49.088 --rc 
geninfo_unexecuted_blocks=1 00:05:49.088 00:05:49.088 ' 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.088 --rc genhtml_branch_coverage=1 00:05:49.088 --rc genhtml_function_coverage=1 00:05:49.088 --rc genhtml_legend=1 00:05:49.088 --rc geninfo_all_blocks=1 00:05:49.088 --rc geninfo_unexecuted_blocks=1 00:05:49.088 00:05:49.088 ' 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.088 --rc genhtml_branch_coverage=1 00:05:49.088 --rc genhtml_function_coverage=1 00:05:49.088 --rc genhtml_legend=1 00:05:49.088 --rc geninfo_all_blocks=1 00:05:49.088 --rc geninfo_unexecuted_blocks=1 00:05:49.088 00:05:49.088 ' 00:05:49.088 14:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.088 14:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57962 00:05:49.088 14:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.088 14:22:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57962 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57962 ']' 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
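The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the `waitforlisten` pattern: spdk_tgt is launched in the background, then the test polls until its RPC socket is ready. A minimal approximation under stated assumptions (the helper name, retry count, and the "socket file exists" readiness probe are simplifications; the real helper also issues an RPC over the socket):

```shell
# Sketch of the startup-wait pattern seen in the log: poll with a bounded
# retry count until the server's UNIX socket appears.
wait_for_socket() {
  local sock=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$sock" ] && return 0   # socket file is present: server is up
    sleep 0.1
  done
  return 1   # gave up: server never created its socket
}
```

Used as `spdk_tgt & wait_for_socket /var/tmp/spdk.sock || exit 1`, this bounds how long a hung server can stall the test, matching the `max_retries=100` visible in the traced helper.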
00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.088 14:22:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.088 [2024-11-20 14:22:50.045170] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:05:49.088 [2024-11-20 14:22:50.045650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57962 ] 00:05:49.347 [2024-11-20 14:22:50.231011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.347 [2024-11-20 14:22:50.368603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.282 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.282 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:50.282 14:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.282 14:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.282 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.282 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.282 { 00:05:50.282 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.282 } 00:05:50.282 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.282 14:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.543 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:50.543 1 heaps totaling size 824.000000 MiB 00:05:50.543 size: 824.000000 MiB heap id: 0 00:05:50.543 end heaps---------- 00:05:50.543 9 mempools totaling size 603.782043 MiB 00:05:50.543 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.543 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.543 size: 100.555481 MiB name: bdev_io_57962 00:05:50.543 size: 50.003479 MiB name: msgpool_57962 00:05:50.543 size: 36.509338 MiB name: fsdev_io_57962 00:05:50.543 size: 21.763794 MiB name: PDU_Pool 00:05:50.543 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.543 size: 4.133484 MiB name: evtpool_57962 00:05:50.543 size: 0.026123 MiB name: Session_Pool 00:05:50.543 end mempools------- 00:05:50.543 6 memzones totaling size 4.142822 MiB 00:05:50.543 size: 1.000366 MiB name: RG_ring_0_57962 00:05:50.543 size: 1.000366 MiB name: RG_ring_1_57962 00:05:50.543 size: 1.000366 MiB name: RG_ring_4_57962 00:05:50.543 size: 1.000366 MiB name: RG_ring_5_57962 00:05:50.543 size: 0.125366 MiB name: RG_ring_2_57962 00:05:50.543 size: 0.015991 MiB name: RG_ring_3_57962 00:05:50.543 end memzones------- 00:05:50.543 14:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.543 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:05:50.543 list of free elements. 
size: 16.781860 MiB 00:05:50.543 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:50.543 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:50.543 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:50.543 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:50.543 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:50.543 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:50.543 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:50.543 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:50.543 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:50.543 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:50.543 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:50.543 element at address: 0x20001b400000 with size: 0.563171 MiB 00:05:50.543 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:50.543 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:50.543 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:50.543 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:50.543 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:50.543 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:50.543 list of standard malloc elements. 
size: 199.287231 MiB 00:05:50.543 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:50.543 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:50.543 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:50.543 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:50.543 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:50.543 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:50.543 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:50.543 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:50.543 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:50.543 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:50.543 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:50.543 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:50.543 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:50.543 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:50.544 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:50.544 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:50.544 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4913c0 with size: 0.000244 
MiB 00:05:50.544 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:50.544 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b492fc0 
with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:50.545 element at 
address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:50.545 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:50.545 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886be80 with size: 0.000244 MiB 
00:05:50.545 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886da80 with 
size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:50.545 element at address: 
0x20002886f680 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:50.545 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:50.545 list of memzone associated elements. size: 607.930908 MiB 00:05:50.545 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:50.545 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.545 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:50.545 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.545 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:50.545 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57962_0 00:05:50.545 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:50.545 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57962_0 00:05:50.545 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:50.545 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57962_0 00:05:50.546 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:50.546 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.546 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:50.546 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.546 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:50.546 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57962_0 00:05:50.546 element at address: 0x2000009ffdc0 
with size: 2.000549 MiB 00:05:50.546 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57962 00:05:50.546 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:50.546 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57962 00:05:50.546 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:50.546 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.546 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:50.546 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.546 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:50.546 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.546 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:50.546 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.546 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:50.546 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57962 00:05:50.546 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:50.546 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57962 00:05:50.546 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:50.546 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57962 00:05:50.546 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:50.546 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57962 00:05:50.546 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:50.546 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57962 00:05:50.546 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:50.546 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57962 00:05:50.546 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:50.546 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.546 element at address: 0x200012c6f980 with 
size: 0.500549 MiB 00:05:50.546 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.546 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:50.546 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.546 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:50.546 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57962 00:05:50.546 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:50.546 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57962 00:05:50.546 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:50.546 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.546 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:50.546 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.546 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:50.546 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57962 00:05:50.546 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:50.546 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.546 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:50.546 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57962 00:05:50.546 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:50.546 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57962 00:05:50.546 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:50.546 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57962 00:05:50.546 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:50.546 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.546 14:22:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.546 14:22:51 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57962 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57962 ']' 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57962 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57962 00:05:50.546 killing process with pid 57962 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57962' 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57962 00:05:50.546 14:22:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57962 00:05:53.077 00:05:53.077 real 0m4.026s 00:05:53.077 user 0m3.997s 00:05:53.077 sys 0m0.681s 00:05:53.077 14:22:53 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.077 14:22:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.077 ************************************ 00:05:53.077 END TEST dpdk_mem_utility 00:05:53.077 ************************************ 00:05:53.077 14:22:53 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.077 14:22:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.077 14:22:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.077 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:05:53.077 ************************************ 00:05:53.077 START TEST event 00:05:53.077 ************************************ 00:05:53.077 14:22:53 event -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.077 * Looking for test storage... 00:05:53.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.077 14:22:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.077 14:22:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.077 14:22:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.077 14:22:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.077 14:22:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.077 14:22:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.077 14:22:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.077 14:22:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.077 14:22:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.077 14:22:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.077 14:22:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.077 14:22:53 event -- scripts/common.sh@344 -- # case "$op" in 00:05:53.077 14:22:53 event -- scripts/common.sh@345 -- # : 1 00:05:53.077 14:22:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.077 14:22:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.077 14:22:53 event -- scripts/common.sh@365 -- # decimal 1 00:05:53.077 14:22:53 event -- scripts/common.sh@353 -- # local d=1 00:05:53.077 14:22:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.077 14:22:53 event -- scripts/common.sh@355 -- # echo 1 00:05:53.077 14:22:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.077 14:22:53 event -- scripts/common.sh@366 -- # decimal 2 00:05:53.077 14:22:53 event -- scripts/common.sh@353 -- # local d=2 00:05:53.077 14:22:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.077 14:22:53 event -- scripts/common.sh@355 -- # echo 2 00:05:53.077 14:22:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.077 14:22:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.077 14:22:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.077 14:22:53 event -- scripts/common.sh@368 -- # return 0 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.077 --rc genhtml_branch_coverage=1 00:05:53.077 --rc genhtml_function_coverage=1 00:05:53.077 --rc genhtml_legend=1 00:05:53.077 --rc geninfo_all_blocks=1 00:05:53.077 --rc geninfo_unexecuted_blocks=1 00:05:53.077 00:05:53.077 ' 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.077 --rc genhtml_branch_coverage=1 00:05:53.077 --rc genhtml_function_coverage=1 00:05:53.077 --rc genhtml_legend=1 00:05:53.077 --rc geninfo_all_blocks=1 00:05:53.077 --rc geninfo_unexecuted_blocks=1 00:05:53.077 00:05:53.077 ' 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.077 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:53.077 --rc genhtml_branch_coverage=1 00:05:53.077 --rc genhtml_function_coverage=1 00:05:53.077 --rc genhtml_legend=1 00:05:53.077 --rc geninfo_all_blocks=1 00:05:53.077 --rc geninfo_unexecuted_blocks=1 00:05:53.077 00:05:53.077 ' 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.077 --rc genhtml_branch_coverage=1 00:05:53.077 --rc genhtml_function_coverage=1 00:05:53.077 --rc genhtml_legend=1 00:05:53.077 --rc geninfo_all_blocks=1 00:05:53.077 --rc geninfo_unexecuted_blocks=1 00:05:53.077 00:05:53.077 ' 00:05:53.077 14:22:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.077 14:22:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.077 14:22:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:53.077 14:22:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.077 14:22:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.077 ************************************ 00:05:53.078 START TEST event_perf 00:05:53.078 ************************************ 00:05:53.078 14:22:53 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.078 Running I/O for 1 seconds...[2024-11-20 14:22:54.040603] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
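The trace above walks through scripts/common.sh's `lt 1.15 2` check via `cmp_versions`: both version strings are split into arrays (the logged `IFS=.-:` / `read -ra` steps) and compared numerically field by field. A minimal sketch of that comparison, assuming plain `.`-separated versions only (the real helper also handles `-`/`:` separators and the gt/ge/le variants):

```shell
# Sketch of the field-by-field version compare traced above; 'ver_lt' is an
# assumed name, not the verbatim scripts/common.sh function.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)            # split on '.' into numeric fields
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                           # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

Comparing numerically per field is what makes `1.15 < 2` come out true here, where a plain string compare would get it wrong.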
00:05:53.078 [2024-11-20 14:22:54.040995] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58070 ] 00:05:53.336 [2024-11-20 14:22:54.229187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.336 [2024-11-20 14:22:54.372583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.336 [2024-11-20 14:22:54.372728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.336 [2024-11-20 14:22:54.372842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.336 Running I/O for 1 seconds...[2024-11-20 14:22:54.372867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.712 00:05:54.712 lcore 0: 191260 00:05:54.712 lcore 1: 191259 00:05:54.712 lcore 2: 191259 00:05:54.712 lcore 3: 191260 00:05:54.712 done. 
00:05:54.712 00:05:54.712 real 0m1.625s 00:05:54.712 user 0m4.377s 00:05:54.712 sys 0m0.121s 00:05:54.712 ************************************ 00:05:54.712 END TEST event_perf 00:05:54.712 ************************************ 00:05:54.712 14:22:55 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.712 14:22:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.712 14:22:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.712 14:22:55 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:54.712 14:22:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.712 14:22:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.712 ************************************ 00:05:54.712 START TEST event_reactor 00:05:54.712 ************************************ 00:05:54.712 14:22:55 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:54.712 [2024-11-20 14:22:55.709475] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:05:54.712 [2024-11-20 14:22:55.709821] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58115 ] 00:05:54.970 [2024-11-20 14:22:55.882957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.971 [2024-11-20 14:22:56.014817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.345 test_start 00:05:56.345 oneshot 00:05:56.345 tick 100 00:05:56.345 tick 100 00:05:56.345 tick 250 00:05:56.345 tick 100 00:05:56.345 tick 100 00:05:56.345 tick 250 00:05:56.345 tick 100 00:05:56.345 tick 500 00:05:56.345 tick 100 00:05:56.345 tick 100 00:05:56.345 tick 250 00:05:56.345 tick 100 00:05:56.345 tick 100 00:05:56.345 test_end 00:05:56.345 00:05:56.345 real 0m1.572s 00:05:56.345 user 0m1.367s 00:05:56.345 sys 0m0.097s 00:05:56.345 14:22:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.345 ************************************ 00:05:56.345 END TEST event_reactor 00:05:56.345 ************************************ 00:05:56.345 14:22:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.345 14:22:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.345 14:22:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:56.345 14:22:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.345 14:22:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.345 ************************************ 00:05:56.345 START TEST event_reactor_perf 00:05:56.345 ************************************ 00:05:56.345 14:22:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.345 [2024-11-20 
14:22:57.343698] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:05:56.345 [2024-11-20 14:22:57.344120] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58146 ] 00:05:56.603 [2024-11-20 14:22:57.530677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.862 [2024-11-20 14:22:57.682641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.321 test_start 00:05:58.321 test_end 00:05:58.321 Performance: 284396 events per second 00:05:58.321 00:05:58.321 real 0m1.619s 00:05:58.321 user 0m1.401s 00:05:58.321 sys 0m0.109s 00:05:58.321 ************************************ 00:05:58.321 END TEST event_reactor_perf 00:05:58.321 ************************************ 00:05:58.321 14:22:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.321 14:22:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.321 14:22:58 event -- event/event.sh@49 -- # uname -s 00:05:58.321 14:22:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.321 14:22:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:58.321 14:22:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.321 14:22:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.321 14:22:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.321 ************************************ 00:05:58.321 START TEST event_scheduler 00:05:58.321 ************************************ 00:05:58.321 14:22:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:58.321 * Looking for test storage... 
00:05:58.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.321 14:22:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.321 --rc genhtml_branch_coverage=1 00:05:58.321 --rc genhtml_function_coverage=1 00:05:58.321 --rc genhtml_legend=1 00:05:58.321 --rc geninfo_all_blocks=1 00:05:58.321 --rc geninfo_unexecuted_blocks=1 00:05:58.321 00:05:58.321 ' 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.321 --rc genhtml_branch_coverage=1 00:05:58.321 --rc genhtml_function_coverage=1 00:05:58.321 --rc 
genhtml_legend=1 00:05:58.321 --rc geninfo_all_blocks=1 00:05:58.321 --rc geninfo_unexecuted_blocks=1 00:05:58.321 00:05:58.321 ' 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.321 --rc genhtml_branch_coverage=1 00:05:58.321 --rc genhtml_function_coverage=1 00:05:58.321 --rc genhtml_legend=1 00:05:58.321 --rc geninfo_all_blocks=1 00:05:58.321 --rc geninfo_unexecuted_blocks=1 00:05:58.321 00:05:58.321 ' 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.321 --rc genhtml_branch_coverage=1 00:05:58.321 --rc genhtml_function_coverage=1 00:05:58.321 --rc genhtml_legend=1 00:05:58.321 --rc geninfo_all_blocks=1 00:05:58.321 --rc geninfo_unexecuted_blocks=1 00:05:58.321 00:05:58.321 ' 00:05:58.321 14:22:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:58.321 14:22:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58222 00:05:58.321 14:22:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.321 14:22:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:58.321 14:22:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58222 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58222 ']' 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:58.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.321 14:22:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.321 [2024-11-20 14:22:59.272119] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:05:58.321 [2024-11-20 14:22:59.272564] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58222 ] 00:05:58.580 [2024-11-20 14:22:59.464982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.580 [2024-11-20 14:22:59.629233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.580 [2024-11-20 14:22:59.629313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.580 [2024-11-20 14:22:59.629413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.580 [2024-11-20 14:22:59.629441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.515 14:23:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.515 14:23:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:59.515 14:23:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.515 14:23:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.515 14:23:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.515 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.515 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.515 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.515 POWER: Cannot set governor of lcore 0 to performance 00:05:59.515 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.515 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.515 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.515 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.515 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:59.515 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:59.515 POWER: Unable to set Power Management Environment for lcore 0 00:05:59.515 [2024-11-20 14:23:00.303891] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:59.515 [2024-11-20 14:23:00.303926] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:59.515 [2024-11-20 14:23:00.303941] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:59.515 [2024-11-20 14:23:00.303969] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.515 [2024-11-20 14:23:00.303983] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.515 [2024-11-20 14:23:00.304006] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.515 14:23:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.515 14:23:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.515 14:23:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.515 14:23:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 [2024-11-20 14:23:00.651729] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:59.774 14:23:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.774 14:23:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.774 14:23:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 ************************************ 00:05:59.774 START TEST scheduler_create_thread 00:05:59.774 ************************************ 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 2 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 3 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 4 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 5 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 6 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.774 7 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 8 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 9 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 10 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.774 14:23:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.672 14:23:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.672 14:23:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.672 14:23:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.672 14:23:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.672 14:23:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.238 ************************************ 00:06:02.238 END TEST scheduler_create_thread 00:06:02.238 ************************************ 00:06:02.238 14:23:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.238 00:06:02.238 real 0m2.622s 00:06:02.238 user 0m0.019s 00:06:02.238 sys 0m0.006s 00:06:02.238 14:23:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.238 14:23:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.496 14:23:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.496 14:23:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58222 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58222 ']' 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58222 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58222 00:06:02.496 killing process with pid 58222 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58222' 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58222 00:06:02.496 14:23:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58222 00:06:02.755 [2024-11-20 14:23:03.767010] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:04.131 00:06:04.131 real 0m5.902s 00:06:04.131 user 0m10.421s 00:06:04.131 sys 0m0.520s 00:06:04.131 14:23:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.131 14:23:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.131 ************************************ 00:06:04.131 END TEST event_scheduler 00:06:04.131 ************************************ 00:06:04.131 14:23:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.131 14:23:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.131 14:23:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.131 14:23:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.131 14:23:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.131 ************************************ 00:06:04.131 START TEST app_repeat 00:06:04.131 ************************************ 00:06:04.131 14:23:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:04.131 Process app_repeat pid: 58338 00:06:04.131 spdk_app_start Round 0 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58338 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58338' 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.131 14:23:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock 00:06:04.131 14:23:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58338 ']' 00:06:04.131 14:23:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.131 14:23:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.131 14:23:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.131 14:23:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.131 14:23:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.131 [2024-11-20 14:23:04.988002] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:06:04.131 [2024-11-20 14:23:04.988179] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58338 ] 00:06:04.131 [2024-11-20 14:23:05.165186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.389 [2024-11-20 14:23:05.299903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.389 [2024-11-20 14:23:05.299905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.323 14:23:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.323 14:23:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.323 14:23:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.581 Malloc0 00:06:05.581 14:23:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.839 Malloc1 00:06:05.839 14:23:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.839 14:23:06 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.839 14:23:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.096 /dev/nbd0 00:06:06.096 14:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.096 14:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.096 14:23:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.096 1+0 records in 00:06:06.096 1+0 
records out 00:06:06.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402668 s, 10.2 MB/s 00:06:06.097 14:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.097 14:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.097 14:23:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.097 14:23:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.097 14:23:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.097 14:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.097 14:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.097 14:23:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.358 /dev/nbd1 00:06:06.358 14:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.358 14:23:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.616 1+0 records in 00:06:06.616 1+0 records out 00:06:06.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377017 s, 10.9 MB/s 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.616 14:23:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.616 14:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.616 14:23:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.616 14:23:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.616 14:23:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.616 14:23:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.875 { 00:06:06.875 "nbd_device": "/dev/nbd0", 00:06:06.875 "bdev_name": "Malloc0" 00:06:06.875 }, 00:06:06.875 { 00:06:06.875 "nbd_device": "/dev/nbd1", 00:06:06.875 "bdev_name": "Malloc1" 00:06:06.875 } 00:06:06.875 ]' 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.875 { 00:06:06.875 "nbd_device": "/dev/nbd0", 00:06:06.875 "bdev_name": "Malloc0" 00:06:06.875 }, 00:06:06.875 { 00:06:06.875 "nbd_device": "/dev/nbd1", 00:06:06.875 "bdev_name": "Malloc1" 00:06:06.875 } 00:06:06.875 ]' 
00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.875 /dev/nbd1' 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.875 /dev/nbd1' 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.875 256+0 records in 00:06:06.875 256+0 records out 00:06:06.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743179 s, 141 MB/s 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.875 256+0 records in 00:06:06.875 256+0 records out 00:06:06.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026661 s, 39.3 MB/s 00:06:06.875 14:23:07 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.875 256+0 records in 00:06:06.875 256+0 records out 00:06:06.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029437 s, 35.6 MB/s 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.875 14:23:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.876 14:23:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.443 14:23:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.701 14:23:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.959 14:23:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.959 14:23:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.525 14:23:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.463 [2024-11-20 14:23:10.472807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.722 [2024-11-20 14:23:10.607580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.722 [2024-11-20 14:23:10.607573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.981 
[2024-11-20 14:23:10.803425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.981 [2024-11-20 14:23:10.803594] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.357 14:23:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.357 spdk_app_start Round 1 00:06:11.357 14:23:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.357 14:23:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock 00:06:11.357 14:23:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58338 ']' 00:06:11.357 14:23:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.357 14:23:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.357 14:23:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:11.357 14:23:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.357 14:23:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.924 14:23:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.924 14:23:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:11.924 14:23:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.188 Malloc0 00:06:12.188 14:23:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.458 Malloc1 00:06:12.458 14:23:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.458 14:23:13 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.458 14:23:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.718 /dev/nbd0 00:06:12.718 14:23:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.718 14:23:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.718 1+0 records in 00:06:12.718 1+0 records out 00:06:12.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031446 s, 13.0 MB/s 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.718 14:23:13 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.718 14:23:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:12.718 14:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.718 14:23:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.718 14:23:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.285 /dev/nbd1 00:06:13.285 14:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.285 14:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.285 1+0 records in 00:06:13.285 1+0 records out 00:06:13.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409213 s, 10.0 MB/s 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:13.285 14:23:14 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:13.285 14:23:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:13.285 14:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.285 14:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.285 14:23:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.285 14:23:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.285 14:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.544 { 00:06:13.544 "nbd_device": "/dev/nbd0", 00:06:13.544 "bdev_name": "Malloc0" 00:06:13.544 }, 00:06:13.544 { 00:06:13.544 "nbd_device": "/dev/nbd1", 00:06:13.544 "bdev_name": "Malloc1" 00:06:13.544 } 00:06:13.544 ]' 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.544 { 00:06:13.544 "nbd_device": "/dev/nbd0", 00:06:13.544 "bdev_name": "Malloc0" 00:06:13.544 }, 00:06:13.544 { 00:06:13.544 "nbd_device": "/dev/nbd1", 00:06:13.544 "bdev_name": "Malloc1" 00:06:13.544 } 00:06:13.544 ]' 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.544 /dev/nbd1' 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.544 /dev/nbd1' 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.544 
14:23:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.544 256+0 records in 00:06:13.544 256+0 records out 00:06:13.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642065 s, 163 MB/s 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.544 256+0 records in 00:06:13.544 256+0 records out 00:06:13.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271529 s, 38.6 MB/s 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.544 14:23:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.545 256+0 records in 00:06:13.545 256+0 records out 00:06:13.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295168 s, 35.5 MB/s 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.545 14:23:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.111 14:23:14 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.111 14:23:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.112 14:23:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.370 14:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.629 14:23:15 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.629 14:23:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.629 14:23:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.196 14:23:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.178 [2024-11-20 14:23:17.182410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.437 [2024-11-20 14:23:17.319017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.437 [2024-11-20 14:23:17.319019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.696 [2024-11-20 14:23:17.516548] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.696 [2024-11-20 14:23:17.516637] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:18.072 14:23:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.072 spdk_app_start Round 2 00:06:18.072 14:23:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:18.072 14:23:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock 00:06:18.072 14:23:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58338 ']' 00:06:18.072 14:23:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.072 14:23:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.072 14:23:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.072 14:23:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.072 14:23:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.636 14:23:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.636 14:23:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:18.636 14:23:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.895 Malloc0 00:06:18.895 14:23:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.154 Malloc1 00:06:19.154 14:23:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.154 
14:23:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.154 14:23:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.722 /dev/nbd0 00:06:19.722 14:23:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.722 14:23:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:19.722 14:23:20 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.722 1+0 records in 00:06:19.722 1+0 records out 00:06:19.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300024 s, 13.7 MB/s 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.722 14:23:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.722 14:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.722 14:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.722 14:23:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.981 /dev/nbd1 00:06:19.981 14:23:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.981 14:23:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.981 14:23:20 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.981 1+0 records in 00:06:19.981 1+0 records out 00:06:19.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544373 s, 7.5 MB/s 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.981 14:23:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:19.981 14:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.981 14:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.981 14:23:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.981 14:23:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.981 14:23:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.240 { 00:06:20.240 "nbd_device": "/dev/nbd0", 00:06:20.240 "bdev_name": "Malloc0" 00:06:20.240 }, 00:06:20.240 { 00:06:20.240 "nbd_device": "/dev/nbd1", 00:06:20.240 "bdev_name": 
"Malloc1" 00:06:20.240 } 00:06:20.240 ]' 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.240 { 00:06:20.240 "nbd_device": "/dev/nbd0", 00:06:20.240 "bdev_name": "Malloc0" 00:06:20.240 }, 00:06:20.240 { 00:06:20.240 "nbd_device": "/dev/nbd1", 00:06:20.240 "bdev_name": "Malloc1" 00:06:20.240 } 00:06:20.240 ]' 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.240 /dev/nbd1' 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.240 /dev/nbd1' 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.240 14:23:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.498 256+0 records in 00:06:20.498 256+0 records out 00:06:20.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00717605 s, 146 MB/s 
00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.498 256+0 records in 00:06:20.498 256+0 records out 00:06:20.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030546 s, 34.3 MB/s 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.498 256+0 records in 00:06:20.498 256+0 records out 00:06:20.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348802 s, 30.1 MB/s 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.498 14:23:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.756 14:23:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.323 14:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.323 14:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:21.323 14:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.323 14:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.323 14:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.323 14:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.324 14:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.324 14:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.324 14:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.324 14:23:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.324 14:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.582 14:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.582 14:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.582 14:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.582 14:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.583 14:23:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.583 14:23:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.151 14:23:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:23.528 [2024-11-20 14:23:24.168575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.528 [2024-11-20 14:23:24.304600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.528 [2024-11-20 14:23:24.304613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.528 [2024-11-20 14:23:24.502788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:23.528 [2024-11-20 14:23:24.502950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.960 14:23:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58338 /var/tmp/spdk-nbd.sock 00:06:24.960 14:23:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58338 ']' 00:06:24.960 14:23:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.960 14:23:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.960 14:23:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:24.961 14:23:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.961 14:23:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:25.527 14:23:26 event.app_repeat -- event/event.sh@39 -- # killprocess 58338 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58338 ']' 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58338 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58338 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.527 killing process with pid 58338 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58338' 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58338 00:06:25.527 14:23:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58338 00:06:26.462 spdk_app_start is called in Round 0. 00:06:26.462 Shutdown signal received, stop current app iteration 00:06:26.462 Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 reinitialization... 00:06:26.462 spdk_app_start is called in Round 1. 00:06:26.462 Shutdown signal received, stop current app iteration 00:06:26.462 Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 reinitialization... 00:06:26.462 spdk_app_start is called in Round 2. 
00:06:26.462 Shutdown signal received, stop current app iteration 00:06:26.462 Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 reinitialization... 00:06:26.462 spdk_app_start is called in Round 3. 00:06:26.462 Shutdown signal received, stop current app iteration 00:06:26.462 14:23:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.462 14:23:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:26.462 00:06:26.462 real 0m22.494s 00:06:26.462 user 0m50.026s 00:06:26.462 sys 0m3.376s 00:06:26.462 14:23:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.462 14:23:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.462 ************************************ 00:06:26.462 END TEST app_repeat 00:06:26.462 ************************************ 00:06:26.462 14:23:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.462 14:23:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.462 14:23:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.462 14:23:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.462 14:23:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.462 ************************************ 00:06:26.462 START TEST cpu_locks 00:06:26.462 ************************************ 00:06:26.462 14:23:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.721 * Looking for test storage... 
00:06:26.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:26.721 14:23:27 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.721 14:23:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.721 14:23:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.721 14:23:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.721 14:23:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:26.721 14:23:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.721 14:23:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.721 --rc genhtml_branch_coverage=1 00:06:26.721 --rc genhtml_function_coverage=1 00:06:26.721 --rc genhtml_legend=1 00:06:26.721 --rc geninfo_all_blocks=1 00:06:26.721 --rc geninfo_unexecuted_blocks=1 00:06:26.721 00:06:26.721 ' 00:06:26.722 14:23:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.722 --rc genhtml_branch_coverage=1 00:06:26.722 --rc genhtml_function_coverage=1 00:06:26.722 --rc genhtml_legend=1 00:06:26.722 --rc geninfo_all_blocks=1 00:06:26.722 --rc geninfo_unexecuted_blocks=1 
00:06:26.722 00:06:26.722 ' 00:06:26.722 14:23:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.722 --rc genhtml_branch_coverage=1 00:06:26.722 --rc genhtml_function_coverage=1 00:06:26.722 --rc genhtml_legend=1 00:06:26.722 --rc geninfo_all_blocks=1 00:06:26.722 --rc geninfo_unexecuted_blocks=1 00:06:26.722 00:06:26.722 ' 00:06:26.722 14:23:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.722 --rc genhtml_branch_coverage=1 00:06:26.722 --rc genhtml_function_coverage=1 00:06:26.722 --rc genhtml_legend=1 00:06:26.722 --rc geninfo_all_blocks=1 00:06:26.722 --rc geninfo_unexecuted_blocks=1 00:06:26.722 00:06:26.722 ' 00:06:26.722 14:23:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.722 14:23:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.722 14:23:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.722 14:23:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.722 14:23:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.722 14:23:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.722 14:23:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.722 ************************************ 00:06:26.722 START TEST default_locks 00:06:26.722 ************************************ 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58814 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.722 
14:23:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58814 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58814 ']' 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.722 14:23:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.981 [2024-11-20 14:23:27.832857] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:06:26.981 [2024-11-20 14:23:27.833077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ] 00:06:26.981 [2024-11-20 14:23:28.025774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.239 [2024-11-20 14:23:28.250138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.175 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.175 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:28.175 14:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58814 00:06:28.175 14:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58814 00:06:28.175 14:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58814 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58814 ']' 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58814 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58814 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.800 killing process with pid 58814 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58814' 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58814 00:06:28.800 14:23:29 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58814 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58814 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58814 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58814 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58814 ']' 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.339 ERROR: process (pid: 58814) is no longer running 00:06:31.339 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58814) - No such process 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.339 00:06:31.339 real 0m4.211s 00:06:31.339 user 0m4.179s 00:06:31.339 sys 0m0.789s 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.339 14:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.339 ************************************ 00:06:31.339 END TEST default_locks 00:06:31.339 ************************************ 00:06:31.339 14:23:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:31.339 14:23:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:31.339 14:23:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.339 14:23:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.339 ************************************ 00:06:31.339 START TEST default_locks_via_rpc 00:06:31.339 ************************************ 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58891 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58891 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58891 ']' 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.339 14:23:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.339 [2024-11-20 14:23:32.093983] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:06:31.339 [2024-11-20 14:23:32.094199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:06:31.339 [2024-11-20 14:23:32.283120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.597 [2024-11-20 14:23:32.417828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.532 14:23:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58891 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58891 00:06:32.532 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58891 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58891 ']' 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58891 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:32.792 killing process with pid 58891 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58891' 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58891 00:06:32.792 14:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58891 00:06:35.326 00:06:35.326 real 0m4.126s 00:06:35.326 user 0m4.074s 00:06:35.326 sys 0m0.792s 00:06:35.326 14:23:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.326 14:23:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.326 ************************************ 00:06:35.326 END TEST default_locks_via_rpc 00:06:35.326 ************************************ 00:06:35.326 14:23:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:35.326 14:23:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.326 14:23:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.326 14:23:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.326 ************************************ 00:06:35.326 START TEST non_locking_app_on_locked_coremask 00:06:35.326 ************************************ 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58965 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58965 /var/tmp/spdk.sock 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58965 ']' 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.326 14:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.326 [2024-11-20 14:23:36.272598] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:06:35.326 [2024-11-20 14:23:36.272810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58965 ] 00:06:35.585 [2024-11-20 14:23:36.461124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.585 [2024-11-20 14:23:36.593770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58992 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58992 /var/tmp/spdk2.sock 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58992 ']' 00:06:36.518 14:23:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.518 14:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.776 [2024-11-20 14:23:37.640121] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:06:36.776 [2024-11-20 14:23:37.640948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58992 ] 00:06:37.033 [2024-11-20 14:23:37.849795] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.033 [2024-11-20 14:23:37.849894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.291 [2024-11-20 14:23:38.125583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.830 14:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.830 14:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.830 14:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58965 00:06:39.830 14:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58965 00:06:39.830 14:23:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58965 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58965 ']' 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58965 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58965 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.396 killing process with pid 58965 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58965' 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58965 00:06:40.396 14:23:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58965 00:06:45.662 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58992 00:06:45.662 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58992 ']' 00:06:45.662 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58992 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58992 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.663 killing process with pid 58992 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58992' 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58992 00:06:45.663 14:23:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58992 00:06:47.644 ************************************ 00:06:47.644 END TEST non_locking_app_on_locked_coremask 00:06:47.644 ************************************ 00:06:47.644 00:06:47.644 real 0m12.061s 
00:06:47.644 user 0m12.551s 00:06:47.644 sys 0m1.591s 00:06:47.644 14:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.644 14:23:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.644 14:23:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.644 14:23:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.644 14:23:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.644 14:23:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.644 ************************************ 00:06:47.644 START TEST locking_app_on_unlocked_coremask 00:06:47.644 ************************************ 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59143 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59143 /var/tmp/spdk.sock 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59143 ']' 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.644 14:23:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.644 [2024-11-20 14:23:48.365055] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:06:47.644 [2024-11-20 14:23:48.365213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ] 00:06:47.644 [2024-11-20 14:23:48.544242] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:47.644 [2024-11-20 14:23:48.544329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.644 [2024-11-20 14:23:48.687068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59165 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59165 /var/tmp/spdk2.sock 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59165 ']' 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.580 14:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.839 [2024-11-20 14:23:49.754799] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:06:48.839 [2024-11-20 14:23:49.755274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59165 ] 00:06:49.099 [2024-11-20 14:23:49.973234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.357 [2024-11-20 14:23:50.271623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.888 14:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.888 14:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:51.888 14:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59165 00:06:51.888 14:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59165 00:06:51.888 14:23:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59143 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59143 ']' 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59143 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59143 00:06:52.455 killing process with pid 59143 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59143' 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59143 00:06:52.455 14:23:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59143 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59165 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59165 ']' 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59165 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59165 00:06:57.762 killing process with pid 59165 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59165' 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59165 00:06:57.762 14:23:58 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59165 00:06:59.764 00:06:59.764 real 0m12.103s 00:06:59.764 user 0m12.698s 00:06:59.764 sys 0m1.628s 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.764 ************************************ 00:06:59.764 END TEST locking_app_on_unlocked_coremask 00:06:59.764 ************************************ 00:06:59.764 14:24:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.764 14:24:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.764 14:24:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.764 14:24:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.764 ************************************ 00:06:59.764 START TEST locking_app_on_locked_coremask 00:06:59.764 ************************************ 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59319 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59319 /var/tmp/spdk.sock 00:06:59.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59319 ']' 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.764 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.765 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.765 14:24:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.765 [2024-11-20 14:24:00.540499] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:06:59.765 [2024-11-20 14:24:00.540709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:06:59.765 [2024-11-20 14:24:00.725622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.023 [2024-11-20 14:24:00.858138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # 
spdk_tgt_pid2=59336 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59336 /var/tmp/spdk2.sock 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59336 /var/tmp/spdk2.sock 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.958 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59336 /var/tmp/spdk2.sock 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59336 ']' 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.959 14:24:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.959 [2024-11-20 14:24:01.862598] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:07:00.959 [2024-11-20 14:24:01.862943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59336 ] 00:07:01.216 [2024-11-20 14:24:02.058090] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59319 has claimed it. 00:07:01.216 [2024-11-20 14:24:02.058177] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:01.782 ERROR: process (pid: 59336) is no longer running 00:07:01.782 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59336) - No such process 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59319 00:07:01.782 14:24:02 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59319 00:07:01.782 14:24:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59319 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59319 ']' 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59319 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59319 00:07:02.044 killing process with pid 59319 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59319' 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59319 00:07:02.044 14:24:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59319 00:07:04.635 00:07:04.635 real 0m4.953s 00:07:04.635 user 0m5.348s 00:07:04.635 sys 0m0.930s 00:07:04.635 14:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.635 14:24:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.635 
************************************ 00:07:04.635 END TEST locking_app_on_locked_coremask 00:07:04.635 ************************************ 00:07:04.635 14:24:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.635 14:24:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.635 14:24:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.635 14:24:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.635 ************************************ 00:07:04.635 START TEST locking_overlapped_coremask 00:07:04.635 ************************************ 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:04.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59406 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59406 /var/tmp/spdk.sock 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59406 ']' 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.635 14:24:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.635 [2024-11-20 14:24:05.583530] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:07:04.635 [2024-11-20 14:24:05.583998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59406 ] 00:07:04.893 [2024-11-20 14:24:05.775118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.893 [2024-11-20 14:24:05.940425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.893 [2024-11-20 14:24:05.940526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.893 [2024-11-20 14:24:05.940539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59429 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59429 /var/tmp/spdk2.sock 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59429 
/var/tmp/spdk2.sock 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:05.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59429 /var/tmp/spdk2.sock 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59429 ']' 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.827 14:24:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.085 [2024-11-20 14:24:07.021253] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:07:06.085 [2024-11-20 14:24:07.021505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:07:06.342 [2024-11-20 14:24:07.230342] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59406 has claimed it. 00:07:06.342 [2024-11-20 14:24:07.230424] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.916 ERROR: process (pid: 59429) is no longer running 00:07:06.916 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59429) - No such process 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59406 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59406 ']' 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59406 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59406 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59406' 00:07:06.916 killing process with pid 59406 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59406 00:07:06.916 14:24:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59406 00:07:09.450 00:07:09.450 real 0m4.623s 00:07:09.450 user 0m12.526s 00:07:09.450 sys 0m0.781s 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.450 ************************************ 
00:07:09.450 END TEST locking_overlapped_coremask 00:07:09.450 ************************************ 00:07:09.450 14:24:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.450 14:24:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.450 14:24:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.450 14:24:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.450 ************************************ 00:07:09.450 START TEST locking_overlapped_coremask_via_rpc 00:07:09.450 ************************************ 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:09.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59499 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59499 /var/tmp/spdk.sock 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59499 ']' 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.450 14:24:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.450 [2024-11-20 14:24:10.221602] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:07:09.450 [2024-11-20 14:24:10.222549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:07:09.450 [2024-11-20 14:24:10.419701] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.450 [2024-11-20 14:24:10.420017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.707 [2024-11-20 14:24:10.585863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.707 [2024-11-20 14:24:10.586026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.707 [2024-11-20 14:24:10.586036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59517 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59517 /var/tmp/spdk2.sock 00:07:10.641 14:24:11 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59517 ']' 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.641 14:24:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.900 [2024-11-20 14:24:11.737218] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:07:10.900 [2024-11-20 14:24:11.737446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59517 ] 00:07:10.900 [2024-11-20 14:24:11.948326] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:10.900 [2024-11-20 14:24:11.948390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.463 [2024-11-20 14:24:12.243438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.463 [2024-11-20 14:24:12.243514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.463 [2024-11-20 14:24:12.243534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.044 14:24:14 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.044 [2024-11-20 14:24:14.598851] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59499 has claimed it. 00:07:14.044 request: 00:07:14.044 { 00:07:14.044 "method": "framework_enable_cpumask_locks", 00:07:14.044 "req_id": 1 00:07:14.044 } 00:07:14.044 Got JSON-RPC error response 00:07:14.044 response: 00:07:14.044 { 00:07:14.044 "code": -32603, 00:07:14.044 "message": "Failed to claim CPU core: 2" 00:07:14.044 } 00:07:14.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59499 /var/tmp/spdk.sock 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59499 ']' 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59517 /var/tmp/spdk2.sock 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59517 ']' 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.044 14:24:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.302 ************************************ 00:07:14.302 END TEST locking_overlapped_coremask_via_rpc 00:07:14.302 ************************************ 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.302 00:07:14.302 real 0m5.132s 00:07:14.302 user 0m1.934s 00:07:14.302 sys 0m0.251s 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.302 14:24:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.302 14:24:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:14.302 14:24:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59499 ]] 00:07:14.302 14:24:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59499 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59499 ']' 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59499 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59499 00:07:14.302 killing process with pid 59499 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59499' 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59499 00:07:14.302 14:24:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59499 00:07:16.833 14:24:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59517 ]] 00:07:16.833 14:24:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59517 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59517 ']' 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59517 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59517 00:07:16.833 killing process with pid 59517 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59517' 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59517 00:07:16.833 14:24:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59517 00:07:19.365 14:24:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.365 14:24:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:19.365 14:24:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59499 ]] 00:07:19.365 14:24:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59499 00:07:19.365 14:24:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59499 ']' 00:07:19.365 14:24:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59499 00:07:19.365 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59499) - No such process 00:07:19.365 Process with pid 59499 is not found 00:07:19.365 14:24:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59499 is not found' 00:07:19.365 14:24:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59517 ]] 00:07:19.365 14:24:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59517 00:07:19.365 14:24:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59517 ']' 00:07:19.365 14:24:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59517 00:07:19.365 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59517) - No such process 00:07:19.365 Process with pid 59517 is not found 00:07:19.365 14:24:19 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59517 is not found' 00:07:19.365 14:24:20 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.365 00:07:19.365 real 0m52.520s 00:07:19.365 user 1m31.073s 00:07:19.365 sys 0m8.122s 00:07:19.365 ************************************ 00:07:19.365 END TEST cpu_locks 00:07:19.365 ************************************ 00:07:19.365 14:24:20 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:19.365 14:24:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 ************************************ 00:07:19.365 END TEST event 00:07:19.365 ************************************ 00:07:19.365 00:07:19.365 real 1m26.254s 00:07:19.365 user 2m38.904s 00:07:19.365 sys 0m12.604s 00:07:19.365 14:24:20 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.365 14:24:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 14:24:20 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.365 14:24:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.365 14:24:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.366 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:07:19.366 ************************************ 00:07:19.366 START TEST thread 00:07:19.366 ************************************ 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.366 * Looking for test storage... 
00:07:19.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:19.366 14:24:20 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.366 14:24:20 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.366 14:24:20 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.366 14:24:20 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.366 14:24:20 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.366 14:24:20 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.366 14:24:20 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.366 14:24:20 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.366 14:24:20 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.366 14:24:20 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.366 14:24:20 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.366 14:24:20 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:19.366 14:24:20 thread -- scripts/common.sh@345 -- # : 1 00:07:19.366 14:24:20 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.366 14:24:20 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.366 14:24:20 thread -- scripts/common.sh@365 -- # decimal 1 00:07:19.366 14:24:20 thread -- scripts/common.sh@353 -- # local d=1 00:07:19.366 14:24:20 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.366 14:24:20 thread -- scripts/common.sh@355 -- # echo 1 00:07:19.366 14:24:20 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.366 14:24:20 thread -- scripts/common.sh@366 -- # decimal 2 00:07:19.366 14:24:20 thread -- scripts/common.sh@353 -- # local d=2 00:07:19.366 14:24:20 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.366 14:24:20 thread -- scripts/common.sh@355 -- # echo 2 00:07:19.366 14:24:20 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.366 14:24:20 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.366 14:24:20 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.366 14:24:20 thread -- scripts/common.sh@368 -- # return 0 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:19.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.366 --rc genhtml_branch_coverage=1 00:07:19.366 --rc genhtml_function_coverage=1 00:07:19.366 --rc genhtml_legend=1 00:07:19.366 --rc geninfo_all_blocks=1 00:07:19.366 --rc geninfo_unexecuted_blocks=1 00:07:19.366 00:07:19.366 ' 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:19.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.366 --rc genhtml_branch_coverage=1 00:07:19.366 --rc genhtml_function_coverage=1 00:07:19.366 --rc genhtml_legend=1 00:07:19.366 --rc geninfo_all_blocks=1 00:07:19.366 --rc geninfo_unexecuted_blocks=1 00:07:19.366 00:07:19.366 ' 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:19.366 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.366 --rc genhtml_branch_coverage=1 00:07:19.366 --rc genhtml_function_coverage=1 00:07:19.366 --rc genhtml_legend=1 00:07:19.366 --rc geninfo_all_blocks=1 00:07:19.366 --rc geninfo_unexecuted_blocks=1 00:07:19.366 00:07:19.366 ' 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:19.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.366 --rc genhtml_branch_coverage=1 00:07:19.366 --rc genhtml_function_coverage=1 00:07:19.366 --rc genhtml_legend=1 00:07:19.366 --rc geninfo_all_blocks=1 00:07:19.366 --rc geninfo_unexecuted_blocks=1 00:07:19.366 00:07:19.366 ' 00:07:19.366 14:24:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.366 14:24:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.366 ************************************ 00:07:19.366 START TEST thread_poller_perf 00:07:19.366 ************************************ 00:07:19.366 14:24:20 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.366 [2024-11-20 14:24:20.312128] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:07:19.366 [2024-11-20 14:24:20.312479] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:07:19.624 [2024-11-20 14:24:20.503035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.624 [2024-11-20 14:24:20.667699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.624 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:21.028 [2024-11-20T14:24:22.085Z] ====================================== 00:07:21.028 [2024-11-20T14:24:22.085Z] busy:2217187846 (cyc) 00:07:21.028 [2024-11-20T14:24:22.085Z] total_run_count: 286000 00:07:21.028 [2024-11-20T14:24:22.085Z] tsc_hz: 2200000000 (cyc) 00:07:21.028 [2024-11-20T14:24:22.085Z] ====================================== 00:07:21.028 [2024-11-20T14:24:22.085Z] poller_cost: 7752 (cyc), 3523 (nsec) 00:07:21.028 00:07:21.028 real 0m1.672s 00:07:21.028 user 0m1.433s 00:07:21.028 sys 0m0.127s 00:07:21.028 14:24:21 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.028 14:24:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.028 ************************************ 00:07:21.028 END TEST thread_poller_perf 00:07:21.028 ************************************ 00:07:21.028 14:24:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.028 14:24:21 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:21.028 14:24:21 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.028 14:24:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.028 ************************************ 00:07:21.028 START TEST thread_poller_perf 00:07:21.028 
************************************ 00:07:21.028 14:24:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.028 [2024-11-20 14:24:22.048593] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:07:21.028 [2024-11-20 14:24:22.048780] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59754 ] 00:07:21.287 [2024-11-20 14:24:22.222792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.545 [2024-11-20 14:24:22.363765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.545 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:22.921 [2024-11-20T14:24:23.978Z] ====================================== 00:07:22.921 [2024-11-20T14:24:23.978Z] busy:2203884278 (cyc) 00:07:22.921 [2024-11-20T14:24:23.978Z] total_run_count: 3606000 00:07:22.921 [2024-11-20T14:24:23.978Z] tsc_hz: 2200000000 (cyc) 00:07:22.921 [2024-11-20T14:24:23.978Z] ====================================== 00:07:22.921 [2024-11-20T14:24:23.978Z] poller_cost: 611 (cyc), 277 (nsec) 00:07:22.921 00:07:22.921 real 0m1.596s 00:07:22.921 user 0m1.389s 00:07:22.921 sys 0m0.096s 00:07:22.921 14:24:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.921 14:24:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.921 ************************************ 00:07:22.921 END TEST thread_poller_perf 00:07:22.921 ************************************ 00:07:22.921 14:24:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:22.921 ************************************ 00:07:22.921 END TEST thread 00:07:22.921 ************************************ 00:07:22.921 
00:07:22.921 real 0m3.552s 00:07:22.921 user 0m2.955s 00:07:22.921 sys 0m0.369s 00:07:22.921 14:24:23 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.921 14:24:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.921 14:24:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:22.921 14:24:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.921 14:24:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.921 14:24:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.921 14:24:23 -- common/autotest_common.sh@10 -- # set +x 00:07:22.921 ************************************ 00:07:22.921 START TEST app_cmdline 00:07:22.921 ************************************ 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.921 * Looking for test storage... 00:07:22.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:22.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.921 14:24:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.921 --rc genhtml_branch_coverage=1 00:07:22.921 --rc genhtml_function_coverage=1 00:07:22.921 --rc genhtml_legend=1 00:07:22.921 --rc geninfo_all_blocks=1 00:07:22.921 --rc geninfo_unexecuted_blocks=1 00:07:22.921 00:07:22.921 ' 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.921 --rc genhtml_branch_coverage=1 00:07:22.921 --rc genhtml_function_coverage=1 00:07:22.921 --rc genhtml_legend=1 00:07:22.921 --rc geninfo_all_blocks=1 00:07:22.921 --rc geninfo_unexecuted_blocks=1 00:07:22.921 00:07:22.921 ' 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.921 --rc genhtml_branch_coverage=1 00:07:22.921 --rc genhtml_function_coverage=1 00:07:22.921 --rc genhtml_legend=1 00:07:22.921 --rc geninfo_all_blocks=1 00:07:22.921 --rc geninfo_unexecuted_blocks=1 00:07:22.921 00:07:22.921 ' 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.921 --rc genhtml_branch_coverage=1 00:07:22.921 --rc genhtml_function_coverage=1 00:07:22.921 --rc genhtml_legend=1 00:07:22.921 --rc geninfo_all_blocks=1 00:07:22.921 --rc 
geninfo_unexecuted_blocks=1 00:07:22.921 00:07:22.921 ' 00:07:22.921 14:24:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.921 14:24:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59843 00:07:22.921 14:24:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59843 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59843 ']' 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.921 14:24:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.921 14:24:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.179 [2024-11-20 14:24:24.019333] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:07:23.179 [2024-11-20 14:24:24.019492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59843 ] 00:07:23.179 [2024-11-20 14:24:24.202487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.438 [2024-11-20 14:24:24.338795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.373 14:24:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.373 14:24:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:24.373 14:24:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:24.631 { 00:07:24.631 "version": "SPDK v25.01-pre git sha1 23429eed7", 00:07:24.631 "fields": { 00:07:24.631 "major": 25, 00:07:24.631 "minor": 1, 00:07:24.631 "patch": 0, 00:07:24.631 "suffix": "-pre", 00:07:24.631 "commit": "23429eed7" 00:07:24.631 } 00:07:24.631 } 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.631 14:24:25 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.631 14:24:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.631 14:24:25 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.891 request: 00:07:24.891 { 00:07:24.891 "method": "env_dpdk_get_mem_stats", 00:07:24.891 "req_id": 1 00:07:24.891 } 00:07:24.891 Got JSON-RPC error response 00:07:24.891 response: 00:07:24.891 { 00:07:24.891 "code": -32601, 00:07:24.891 "message": "Method not found" 00:07:24.891 } 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.891 14:24:25 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59843 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59843 ']' 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59843 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.891 14:24:25 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59843 00:07:25.149 14:24:25 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.149 14:24:25 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.149 killing process with pid 59843 00:07:25.149 14:24:25 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59843' 00:07:25.149 14:24:25 app_cmdline -- common/autotest_common.sh@973 -- # kill 59843 00:07:25.149 14:24:25 app_cmdline -- common/autotest_common.sh@978 -- # wait 59843 00:07:27.679 00:07:27.679 real 0m4.543s 00:07:27.679 user 0m4.952s 00:07:27.679 sys 0m0.719s 00:07:27.679 14:24:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.679 14:24:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.679 ************************************ 00:07:27.679 END TEST app_cmdline 00:07:27.679 ************************************ 00:07:27.679 14:24:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.679 14:24:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.679 14:24:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.679 14:24:28 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.679 ************************************ 00:07:27.679 START TEST version 00:07:27.680 ************************************ 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.680 * Looking for test storage... 00:07:27.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.680 14:24:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.680 14:24:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.680 14:24:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.680 14:24:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.680 14:24:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.680 14:24:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.680 14:24:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.680 14:24:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.680 14:24:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.680 14:24:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.680 14:24:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.680 14:24:28 version -- scripts/common.sh@344 -- # case "$op" in 00:07:27.680 14:24:28 version -- scripts/common.sh@345 -- # : 1 00:07:27.680 14:24:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.680 14:24:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.680 14:24:28 version -- scripts/common.sh@365 -- # decimal 1 00:07:27.680 14:24:28 version -- scripts/common.sh@353 -- # local d=1 00:07:27.680 14:24:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.680 14:24:28 version -- scripts/common.sh@355 -- # echo 1 00:07:27.680 14:24:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.680 14:24:28 version -- scripts/common.sh@366 -- # decimal 2 00:07:27.680 14:24:28 version -- scripts/common.sh@353 -- # local d=2 00:07:27.680 14:24:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.680 14:24:28 version -- scripts/common.sh@355 -- # echo 2 00:07:27.680 14:24:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.680 14:24:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.680 14:24:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.680 14:24:28 version -- scripts/common.sh@368 -- # return 0 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.680 --rc genhtml_branch_coverage=1 00:07:27.680 --rc genhtml_function_coverage=1 00:07:27.680 --rc genhtml_legend=1 00:07:27.680 --rc geninfo_all_blocks=1 00:07:27.680 --rc geninfo_unexecuted_blocks=1 00:07:27.680 00:07:27.680 ' 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.680 --rc genhtml_branch_coverage=1 00:07:27.680 --rc genhtml_function_coverage=1 00:07:27.680 --rc genhtml_legend=1 00:07:27.680 --rc geninfo_all_blocks=1 00:07:27.680 --rc geninfo_unexecuted_blocks=1 00:07:27.680 00:07:27.680 ' 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.680 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.680 --rc genhtml_branch_coverage=1 00:07:27.680 --rc genhtml_function_coverage=1 00:07:27.680 --rc genhtml_legend=1 00:07:27.680 --rc geninfo_all_blocks=1 00:07:27.680 --rc geninfo_unexecuted_blocks=1 00:07:27.680 00:07:27.680 ' 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.680 --rc genhtml_branch_coverage=1 00:07:27.680 --rc genhtml_function_coverage=1 00:07:27.680 --rc genhtml_legend=1 00:07:27.680 --rc geninfo_all_blocks=1 00:07:27.680 --rc geninfo_unexecuted_blocks=1 00:07:27.680 00:07:27.680 ' 00:07:27.680 14:24:28 version -- app/version.sh@17 -- # get_header_version major 00:07:27.680 14:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.680 14:24:28 version -- app/version.sh@17 -- # major=25 00:07:27.680 14:24:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:27.680 14:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:27.680 14:24:28 version -- app/version.sh@18 -- # minor=1 00:07:27.680 14:24:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:27.680 14:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.680 14:24:28 version -- app/version.sh@19 -- # patch=0 00:07:27.680 
14:24:28 version -- app/version.sh@20 -- # get_header_version suffix 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # cut -f2 00:07:27.680 14:24:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.680 14:24:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.680 14:24:28 version -- app/version.sh@20 -- # suffix=-pre 00:07:27.680 14:24:28 version -- app/version.sh@22 -- # version=25.1 00:07:27.680 14:24:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:27.680 14:24:28 version -- app/version.sh@28 -- # version=25.1rc0 00:07:27.680 14:24:28 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:27.680 14:24:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:27.680 14:24:28 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:27.680 14:24:28 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:27.680 00:07:27.680 real 0m0.273s 00:07:27.680 user 0m0.158s 00:07:27.680 sys 0m0.143s 00:07:27.680 14:24:28 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.680 14:24:28 version -- common/autotest_common.sh@10 -- # set +x 00:07:27.680 ************************************ 00:07:27.680 END TEST version 00:07:27.680 ************************************ 00:07:27.680 14:24:28 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:27.680 14:24:28 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:27.680 14:24:28 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:27.680 14:24:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.680 14:24:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.680 14:24:28 -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.680 ************************************ 00:07:27.680 START TEST bdev_raid 00:07:27.680 ************************************ 00:07:27.680 14:24:28 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:27.680 * Looking for test storage... 00:07:27.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:27.680 14:24:28 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.680 14:24:28 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.680 14:24:28 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.940 14:24:28 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.940 --rc genhtml_branch_coverage=1 00:07:27.940 --rc genhtml_function_coverage=1 00:07:27.940 --rc genhtml_legend=1 00:07:27.940 --rc geninfo_all_blocks=1 00:07:27.940 --rc geninfo_unexecuted_blocks=1 00:07:27.940 00:07:27.940 ' 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.940 --rc genhtml_branch_coverage=1 00:07:27.940 --rc genhtml_function_coverage=1 00:07:27.940 --rc genhtml_legend=1 00:07:27.940 --rc geninfo_all_blocks=1 00:07:27.940 --rc geninfo_unexecuted_blocks=1 00:07:27.940 00:07:27.940 ' 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:07:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.940 --rc genhtml_branch_coverage=1 00:07:27.940 --rc genhtml_function_coverage=1 00:07:27.940 --rc genhtml_legend=1 00:07:27.940 --rc geninfo_all_blocks=1 00:07:27.940 --rc geninfo_unexecuted_blocks=1 00:07:27.940 00:07:27.940 ' 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.940 --rc genhtml_branch_coverage=1 00:07:27.940 --rc genhtml_function_coverage=1 00:07:27.940 --rc genhtml_legend=1 00:07:27.940 --rc geninfo_all_blocks=1 00:07:27.940 --rc geninfo_unexecuted_blocks=1 00:07:27.940 00:07:27.940 ' 00:07:27.940 14:24:28 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:27.940 14:24:28 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:27.940 14:24:28 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:27.940 14:24:28 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:27.940 14:24:28 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:27.940 14:24:28 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:27.940 14:24:28 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.940 14:24:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.940 ************************************ 00:07:27.940 START TEST raid1_resize_data_offset_test 00:07:27.940 ************************************ 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=60038 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60038' 00:07:27.940 Process raid pid: 60038 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60038 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60038 ']' 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.940 14:24:28 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.940 [2024-11-20 14:24:28.940763] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:07:27.940 [2024-11-20 14:24:28.940960] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.199 [2024-11-20 14:24:29.131335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.460 [2024-11-20 14:24:29.270861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.460 [2024-11-20 14:24:29.482034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.460 [2024-11-20 14:24:29.482084] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.064 14:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.064 14:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:29.064 14:24:29 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:29.064 14:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.064 14:24:29 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.064 malloc0 00:07:29.064 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.064 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:29.064 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.064 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.323 malloc1 00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.323 14:24:30 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.323 null0 00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.323 [2024-11-20 14:24:30.156777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:29.323 [2024-11-20 14:24:30.159208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:29.323 [2024-11-20 14:24:30.159287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:29.323 [2024-11-20 14:24:30.159508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:29.323 [2024-11-20 14:24:30.159533] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:29.323 [2024-11-20 14:24:30.159870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:29.323 [2024-11-20 14:24:30.160099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:29.323 [2024-11-20 14:24:30.160121] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:29.323 [2024-11-20 14:24:30.160304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.323 [2024-11-20 14:24:30.216823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.323 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.891 malloc2
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.891 [2024-11-20 14:24:30.767611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed [2024-11-20 14:24:30.784856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-11-20 14:24:30.787308] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60038
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60038 ']'
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60038
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60038
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:29.891 killing process with pid 60038
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60038'
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60038
00:07:29.891 14:24:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60038
00:07:29.891 [2024-11-20 14:24:30.876377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:29.891 [2024-11-20 14:24:30.878155] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:07:29.891 [2024-11-20 14:24:30.878228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:29.891 [2024-11-20 14:24:30.878255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:07:29.891 [2024-11-20 14:24:30.909883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:29.891 [2024-11-20 14:24:30.910306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:29.891 [2024-11-20 14:24:30.910342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:31.792 [2024-11-20 14:24:32.569916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:32.728 14:24:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:07:32.728
00:07:32.728 real 0m4.842s
00:07:32.728 user 0m4.808s
00:07:32.728 sys 0m0.672s
00:07:32.728 14:24:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:32.728 ************************************
00:07:32.728 14:24:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.728 END TEST raid1_resize_data_offset_test
00:07:32.728 ************************************
00:07:32.728 14:24:33 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:07:32.728 14:24:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:32.728 14:24:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:32.728 14:24:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:32.728 ************************************
00:07:32.728 START TEST raid0_resize_superblock_test
00:07:32.728 ************************************
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60123
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60123' Process raid pid: 60123
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60123
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60123 ']'
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:32.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:32.728 14:24:33 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.986 [2024-11-20 14:24:33.817271] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... [2024-11-20 14:24:33.817430] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:32.986 [2024-11-20 14:24:34.000592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.245 [2024-11-20 14:24:34.180115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.504 [2024-11-20 14:24:34.388794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size [2024-11-20 14:24:34.388856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:34.068 14:24:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:34.068 14:24:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:34.068 14:24:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:34.068 14:24:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.068 14:24:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.633 malloc0
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.633 [2024-11-20 14:24:35.421973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 [2024-11-20 14:24:35.422044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened [2024-11-20 14:24:35.422084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 [2024-11-20 14:24:35.422105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed [2024-11-20 14:24:35.424914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered [2024-11-20 14:24:35.424960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 pt0
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.633 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.634 e8c7adc4-3bfd-49a1-a67d-11a39d4b5b2f
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.634 adb8892c-6fce-47a4-bae7-bdc0a66efdd7
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.634 ddebbff7-234f-4a23-8b54-1fc7b3c096ac
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.634 [2024-11-20 14:24:35.568657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev adb8892c-6fce-47a4-bae7-bdc0a66efdd7 is claimed [2024-11-20 14:24:35.568771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ddebbff7-234f-4a23-8b54-1fc7b3c096ac is claimed [2024-11-20 14:24:35.568960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 [2024-11-20 14:24:35.568986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 [2024-11-20 14:24:35.569332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 [2024-11-20 14:24:35.569608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 [2024-11-20 14:24:35.569648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 [2024-11-20 14:24:35.569840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.634 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.634 [2024-11-20 14:24:35.684968] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.893 [2024-11-20 14:24:35.720996] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev [2024-11-20 14:24:35.721036] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'adb8892c-6fce-47a4-bae7-bdc0a66efdd7' was resized: old size 131072, new size 204800
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.893 [2024-11-20 14:24:35.728833] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev [2024-11-20 14:24:35.728865] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ddebbff7-234f-4a23-8b54-1fc7b3c096ac' was resized: old size 131072, new size 204800 [2024-11-20 14:24:35.728896] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.893 [2024-11-20 14:24:35.837061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.893 [2024-11-20 14:24:35.888850] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 [2024-11-20 14:24:35.888952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 [2024-11-20 14:24:35.888976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline [2024-11-20 14:24:35.888997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 [2024-11-20 14:24:35.889148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct [2024-11-20 14:24:35.889206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct [2024-11-20 14:24:35.889226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.893 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.893 [2024-11-20 14:24:35.896682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 [2024-11-20 14:24:35.896754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened [2024-11-20 14:24:35.896785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 [2024-11-20 14:24:35.896803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed [2024-11-20 14:24:35.899835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered [2024-11-20 14:24:35.899883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:34.894 pt0
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine [2024-11-20 14:24:35.902321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev adb8892c-6fce-47a4-bae7-bdc0a66efdd7
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable [2024-11-20 14:24:35.902407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev adb8892c-6fce-47a4-bae7-bdc0a66efdd7 is claimed [2024-11-20 14:24:35.902545] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ddebbff7-234f-4a23-8b54-1fc7b3c096ac [2024-11-20 14:24:35.902579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ddebbff7-234f-4a23-8b54-1fc7b3c096ac is claimed
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x [2024-11-20 14:24:35.902766] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ddebbff7-234f-4a23-8b54-1fc7b3c096ac (2) smaller than existing raid bdev Raid (3) [2024-11-20 14:24:35.902809] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev adb8892c-6fce-47a4-bae7-bdc0a66efdd7: File exists [2024-11-20 14:24:35.902860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 [2024-11-20 14:24:35.902879] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 [2024-11-20 14:24:35.903204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 [2024-11-20 14:24:35.903408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 [2024-11-20 14:24:35.903425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 [2024-11-20 14:24:35.903614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:34.894 [2024-11-20 14:24:35.916989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:34.894 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60123
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60123 ']'
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60123
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60123
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:35.152 killing process with pid 60123
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60123'
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60123 [2024-11-20 14:24:35.998380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:35.152 14:24:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60123 [2024-11-20 14:24:35.998498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct [2024-11-20 14:24:35.998566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct [2024-11-20 14:24:35.998581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:36.527 [2024-11-20 14:24:37.322424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:37.462 14:24:38 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:37.463
00:07:37.463 real 0m4.718s
00:07:37.463 user 0m5.064s
00:07:37.463 sys 0m0.645s
00:07:37.463 14:24:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:37.463 ************************************
00:07:37.463 END TEST raid0_resize_superblock_test
00:07:37.463 ************************************
00:07:37.463 14:24:38 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.463 14:24:38 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:07:37.463 14:24:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:37.463 14:24:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:37.463 14:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:37.463 ************************************
00:07:37.463 START TEST raid1_resize_superblock_test
00:07:37.463 ************************************
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60221 Process raid pid: 60221
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60221'
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60221
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60221 ']'
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:37.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:37.463 14:24:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:37.721 [2024-11-20 14:24:38.585562] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... [2024-11-20 14:24:38.585727] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:37.721 [2024-11-20 14:24:38.762460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.979 [2024-11-20 14:24:38.894972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.237 [2024-11-20 14:24:39.101933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size [2024-11-20 14:24:39.101991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:38.804 14:24:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:38.804 14:24:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:38.804 14:24:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:38.804 14:24:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.804 14:24:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.371 malloc0
00:07:39.371 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.371 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:39.371 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.371 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.371 [2024-11-20 14:24:40.192424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 [2024-11-20 14:24:40.192493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened [2024-11-20 14:24:40.192528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 [2024-11-20 14:24:40.192547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed [2024-11-20 14:24:40.195359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered [2024-11-20 14:24:40.195404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 pt0
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.372 f7e68ff0-ad30-4996-bfa0-05763d394757
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.372 97796cc9-ecfa-4bdf-aa36-9a6e248a19dd
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.372 2dd10557-b043-4bff-b119-0c1348a13ee1
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.372 [2024-11-20 14:24:40.340415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 97796cc9-ecfa-4bdf-aa36-9a6e248a19dd is claimed [2024-11-20 14:24:40.340547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2dd10557-b043-4bff-b119-0c1348a13ee1 is claimed [2024-11-20 14:24:40.340780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 [2024-11-20 14:24:40.340818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 [2024-11-20 14:24:40.341190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 [2024-11-20 14:24:40.341472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 [2024-11-20 14:24:40.341500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 [2024-11-20 14:24:40.341725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.372 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.630 [2024-11-20 14:24:40.452741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:39.630 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 [2024-11-20 14:24:40.500741] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:39.631 [2024-11-20 14:24:40.500780] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '97796cc9-ecfa-4bdf-aa36-9a6e248a19dd' was resized: old size 131072, new size 204800 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:39.631 14:24:40 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 [2024-11-20 14:24:40.508607] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:39.631 [2024-11-20 14:24:40.508668] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2dd10557-b043-4bff-b119-0c1348a13ee1' was resized: old size 131072, new size 204800 00:07:39.631 [2024-11-20 14:24:40.508733] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:39.631 [2024-11-20 14:24:40.632745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 [2024-11-20 14:24:40.680490] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:39.631 [2024-11-20 14:24:40.680582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:39.631 [2024-11-20 14:24:40.680641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:39.631 [2024-11-20 14:24:40.680851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.631 [2024-11-20 14:24:40.681130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.631 [2024-11-20 14:24:40.681226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.631 [2024-11-20 14:24:40.681250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:39.889 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.889 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:39.889 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.889 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.889 [2024-11-20 14:24:40.688403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:39.889 [2024-11-20 14:24:40.688457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.889 [2024-11-20 14:24:40.688486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:39.889 [2024-11-20 14:24:40.688506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.889 [2024-11-20 14:24:40.691681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.889 [2024-11-20 14:24:40.691737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:39.889 pt0 00:07:39.889 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.889 
14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:39.889 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.889 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.889 [2024-11-20 14:24:40.693985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 97796cc9-ecfa-4bdf-aa36-9a6e248a19dd 00:07:39.889 [2024-11-20 14:24:40.694067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 97796cc9-ecfa-4bdf-aa36-9a6e248a19dd is claimed 00:07:39.889 [2024-11-20 14:24:40.694202] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2dd10557-b043-4bff-b119-0c1348a13ee1 00:07:39.889 [2024-11-20 14:24:40.694234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2dd10557-b043-4bff-b119-0c1348a13ee1 is claimed 00:07:39.889 [2024-11-20 14:24:40.694381] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2dd10557-b043-4bff-b119-0c1348a13ee1 (2) smaller than existing raid bdev Raid (3) 00:07:39.889 [2024-11-20 14:24:40.694414] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 97796cc9-ecfa-4bdf-aa36-9a6e248a19dd: File exists 00:07:39.889 [2024-11-20 14:24:40.694466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:39.889 [2024-11-20 14:24:40.694492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:39.889 [2024-11-20 14:24:40.694834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:39.889 [2024-11-20 14:24:40.695047] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:39.890 [2024-11-20 14:24:40.695064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:39.890 
[2024-11-20 14:24:40.695251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.890 [2024-11-20 14:24:40.708734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60221 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60221 ']' 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60221 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60221 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.890 killing process with pid 60221 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60221' 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60221 00:07:39.890 [2024-11-20 14:24:40.791405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.890 [2024-11-20 14:24:40.791486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.890 14:24:40 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60221 00:07:39.890 [2024-11-20 14:24:40.791556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.890 [2024-11-20 14:24:40.791572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:41.265 [2024-11-20 14:24:42.136568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.200 14:24:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:42.200 00:07:42.200 real 0m4.740s 00:07:42.200 user 0m5.086s 00:07:42.200 sys 0m0.660s 00:07:42.200 14:24:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.200 ************************************ 00:07:42.200 14:24:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.200 END TEST raid1_resize_superblock_test 00:07:42.200 ************************************ 00:07:42.458 
14:24:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:42.458 14:24:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:42.458 14:24:43 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:42.458 14:24:43 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:42.458 14:24:43 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:42.458 14:24:43 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:42.458 14:24:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.458 14:24:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.458 14:24:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.458 ************************************ 00:07:42.458 START TEST raid_function_test_raid0 00:07:42.458 ************************************ 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60324 00:07:42.458 Process raid pid: 60324 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60324' 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60324 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60324 ']' 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:42.458 14:24:43 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.458 14:24:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:42.458 [2024-11-20 14:24:43.415003] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:07:42.458 [2024-11-20 14:24:43.415185] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.716 [2024-11-20 14:24:43.596567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.716 [2024-11-20 14:24:43.732435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.974 [2024-11-20 14:24:43.944171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.974 [2024-11-20 14:24:43.944233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:43.540 Base_1 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:43.540 Base_2 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.540 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:43.540 [2024-11-20 14:24:44.488265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:43.541 [2024-11-20 14:24:44.491114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:43.541 [2024-11-20 14:24:44.491268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:43.541 [2024-11-20 14:24:44.491289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:43.541 [2024-11-20 14:24:44.491705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:43.541 [2024-11-20 14:24:44.491952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:43.541 [2024-11-20 14:24:44.491978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:07:43.541 [2024-11-20 14:24:44.492305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.541 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:43.541 
14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:44.107 [2024-11-20 14:24:44.860429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:44.107 /dev/nbd0 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.107 1+0 records in 00:07:44.107 1+0 records out 00:07:44.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328587 s, 12.5 MB/s 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:44.107 
14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:44.107 14:24:44 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:44.412 { 00:07:44.412 "nbd_device": "/dev/nbd0", 00:07:44.412 "bdev_name": "raid" 00:07:44.412 } 00:07:44.412 ]' 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:44.412 { 00:07:44.412 "nbd_device": "/dev/nbd0", 00:07:44.412 "bdev_name": "raid" 00:07:44.412 } 00:07:44.412 ]' 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:44.412 4096+0 records in
00:07:44.412 4096+0 records out
00:07:44.412 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0293486 s, 71.5 MB/s
00:07:44.412 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:44.670 4096+0 records in
00:07:44.670 4096+0 records out
00:07:44.670 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.337249 s, 6.2 MB/s
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:44.670 128+0 records in
00:07:44.670 128+0 records out
00:07:44.670 65536 bytes (66 kB, 64 KiB) copied, 0.00104464 s, 62.7 MB/s
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:44.670 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:44.929 2035+0 records in
00:07:44.929 2035+0 records out
00:07:44.929 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00930245 s, 112 MB/s
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:44.929 456+0 records in
00:07:44.929 456+0 records out
00:07:44.929 233472 bytes (233 kB, 228 KiB) copied, 0.00343414 s, 68.0 MB/s
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:44.929 14:24:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
[2024-11-20 14:24:46.135417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:45.188 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60324
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60324 ']'
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60324
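The `waitfornbd_exit` entries above show nbd_common.sh polling `/proc/partitions` up to 20 times before giving up on the nbd device. A minimal sketch of that retry pattern as a generic helper; `waitfor` and the file-based demo are hypothetical names for illustration, not part of nbd_common.sh, and a plain file stands in for the `/proc/partitions` check:

```shell
#!/usr/bin/env bash
# Generic retry helper mirroring the loop structure traced above:
# try a predicate up to 20 times, sleeping briefly between attempts.
waitfor() {
    local i
    for ((i = 1; i <= 20; i++)); do
        "$@" && return 0   # predicate succeeded, stop polling
        sleep 0.1
    done
    return 1               # gave up after 20 attempts
}

# Demo: wait for a file to appear (stand-in for grepping /proc/partitions).
marker=$(mktemp -u)        # path only; the file does not exist yet
(sleep 0.3; touch "$marker") &
waitfor test -e "$marker" && echo "marker appeared"
rm -f "$marker"
```

The real script breaks out of the loop as soon as the `grep -q -w nbd0 /proc/partitions` predicate flips; the helper above generalizes that to any command.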
00:07:45.446 14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname
00:07:45.705 14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:45.705 14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60324
00:07:45.705 14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:45.705 14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 60324
14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60324'
14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60324
[2024-11-20 14:24:46.528139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
14:24:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60324
00:07:45.705 [2024-11-20 14:24:46.528297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:45.705 [2024-11-20 14:24:46.528372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:45.705 [2024-11-20 14:24:46.528407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:45.705 [2024-11-20 14:24:46.712951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:47.081 14:24:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:07:47.081
00:07:47.081 real 0m4.516s
00:07:47.081 user 0m5.532s
00:07:47.081 sys 0m1.120s
00:07:47.081 14:24:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.081 14:24:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:47.081 ************************************
00:07:47.081 END TEST raid_function_test_raid0
00:07:47.081 ************************************
00:07:47.081 14:24:47 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:07:47.081 14:24:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:47.081 14:24:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:47.081 14:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:47.081 ************************************
00:07:47.081 START TEST raid_function_test_concat
00:07:47.081 ************************************
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60462
Process raid pid: 60462
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60462'
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60462
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60462 ']'
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100
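The raid0 run that finishes above exercised `raid_unmap_data_verify`: fill a reference file from `/dev/urandom`, copy it onto the raid bdev through nbd, then for each (block offset, block count) pair zero the range in the reference file, `blkdiscard` the same byte range on the device, flush, and `cmp` the two images. The following is a self-contained sketch of that loop's structure under one big assumption: a plain temp file stands in for `/dev/nbd0`, and zeroing via `dd` stands in for `blkdiscard` (which requires a real block device), so the paths and the discard emulation here are hypothetical:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Temp-file stand-ins for /raidtest/raidrandtest and /dev/nbd0.
ref=$(mktemp)
dev=$(mktemp)
blksize=512
unmap_blk_offs=(0 1028 321)    # same block offsets as bdev_raid.sh@23
unmap_blk_nums=(128 2035 456)  # same block counts as bdev_raid.sh@24

# Write 4096 random blocks, mirror them to the "device", verify identical.
dd if=/dev/urandom of="$ref" bs=$blksize count=4096 2>/dev/null
cp "$ref" "$dev"               # stands in for: dd ... of=/dev/nbd0 oflag=direct
cmp -b -n 2097152 "$ref" "$dev"

for ((i = 0; i < 3; i++)); do
    unmap_off=$((unmap_blk_offs[i] * blksize))
    unmap_len=$((unmap_blk_nums[i] * blksize))
    # Zero the range in the reference file (bdev_raid.sh@41).
    dd if=/dev/zero of="$ref" bs=$blksize seek="${unmap_blk_offs[i]}" \
       count="${unmap_blk_nums[i]}" conv=notrunc 2>/dev/null
    # On the real device this is: blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0
    dd if=/dev/zero of="$dev" bs=$blksize seek="${unmap_blk_offs[i]}" \
       count="${unmap_blk_nums[i]}" conv=notrunc 2>/dev/null
    cmp -b -n 2097152 "$ref" "$dev"
done
echo "unmap/verify loop passed"
```

Note the byte arithmetic matches the trace: block offset 1028 becomes `unmap_off=526336` and 2035 blocks become `unmap_len=1041920` at a 512-byte logical sector size.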
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
14:24:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:47.081 14:24:47 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:47.081 [2024-11-20 14:24:47.976663] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization...
00:07:47.339 [2024-11-20 14:24:47.976858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:47.339 [2024-11-20 14:24:48.169322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.339 [2024-11-20 14:24:48.327965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.597 [2024-11-20 14:24:48.552367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:47.597 [2024-11-20 14:24:48.552429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:48.219 14:24:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:48.219 14:24:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0
00:07:48.219 14:24:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:48.219 14:24:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.219 14:24:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:48.219 Base_1
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:48.219 Base_2
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:48.219 [2024-11-20 14:24:49.080216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:48.219 [2024-11-20 14:24:49.082614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:48.219 [2024-11-20 14:24:49.082732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:48.219 [2024-11-20 14:24:49.082753] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:48.219 [2024-11-20 14:24:49.083079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:48.219 [2024-11-20 14:24:49.083282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:48.219 [2024-11-20 14:24:49.083304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:48.219 [2024-11-20 14:24:49.083481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:48.219 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:48.477 [2024-11-20 14:24:49.404362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
/dev/nbd0
14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:48.477 1+0 records in
00:07:48.477 1+0 records out
00:07:48.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339954 s, 12.0 MB/s
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:48.477 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:48.735 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:48.735 {
00:07:48.735 "nbd_device": "/dev/nbd0",
00:07:48.735 "bdev_name": "raid"
00:07:48.735 }
00:07:48.735 ]'
00:07:48.735 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:48.735 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:48.735 {
00:07:48.735 "nbd_device": "/dev/nbd0",
00:07:48.735 "bdev_name": "raid"
00:07:48.735 }
00:07:48.735 ]'
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:48.994 4096+0 records in
00:07:48.994 4096+0 records out
00:07:48.994 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0319923 s, 65.6 MB/s
00:07:48.994 14:24:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:49.252 4096+0 records in
00:07:49.252 4096+0 records out
00:07:49.252 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.331321 s, 6.3 MB/s
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:49.252 128+0 records in
00:07:49.252 128+0 records out
00:07:49.252 65536 bytes (66 kB, 64 KiB) copied, 0.000908659 s, 72.1 MB/s
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:49.252 2035+0 records in
00:07:49.252 2035+0 records out
00:07:49.252 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0123771 s, 84.2 MB/s
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:49.252 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:49.511 456+0 records in
00:07:49.511 456+0 records out
00:07:49.511 233472 bytes (233 kB, 228 KiB) copied, 0.00320843 s, 72.8 MB/s
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:49.511 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:49.770 [2024-11-20 14:24:50.635702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:49.770 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:50.029 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:50.029 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:50.029 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:50.029 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:50.029 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:50.029 14:24:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60462
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60462 ']'
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60462
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
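The `nbd_get_count` steps traced above pipe the `nbd_get_disks` JSON through `jq -r '.[] | .nbd_device'` and then `grep -c /dev/nbd`; the bare `true` entry in the trace appears because `grep -c` exits non-zero when the count is zero, and nbd_common.sh swallows that failure. A rough, dependency-free equivalent of the counting step (`nbd_count` and the sample JSON strings are hypothetical, and `grep -o` stands in for `jq` here):

```shell
#!/usr/bin/env bash
# Count nbd devices in nbd_get_disks-style JSON. grep -c prints the count
# but exits 1 when it is zero, hence the "|| true" (the "true" in the trace).
nbd_count() {
    local json=$1 names
    names=$(printf '%s\n' "$json" | grep -o '/dev/nbd[0-9]*' || true)  # jq stand-in
    printf '%s\n' "$names" | grep -c /dev/nbd || true
}

nbd_count '[]'                                                  # prints 0
nbd_count '[{"nbd_device": "/dev/nbd0", "bdev_name": "raid"}]'  # prints 1
```

This mirrors why the test sees `count=0` after stopping the disk and `count=1` while it is attached.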
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60462
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:50.029 killing process with pid 60462
14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60462'
00:07:50.029 14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60462
00:07:50.030 [2024-11-20 14:24:51.040663] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
14:24:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60462
00:07:50.030 [2024-11-20 14:24:51.040796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:50.030 [2024-11-20 14:24:51.040868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:50.030 [2024-11-20 14:24:51.040887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:50.288 [2024-11-20 14:24:51.222421] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:51.222 14:24:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:51.222
00:07:51.222 real 0m4.407s
00:07:51.222 user 0m5.418s
00:07:51.222 sys 0m1.063s
00:07:51.222 14:24:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.222 ************************************
00:07:51.222 END TEST raid_function_test_concat
00:07:51.222 ************************************
00:07:51.222 14:24:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:51.481 14:24:52 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:51.481 14:24:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:51.481 14:24:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.481 14:24:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:51.481 ************************************
00:07:51.481 START TEST raid0_resize_test
00:07:51.481 ************************************
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60592
Process raid pid: 60592
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60592'
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60592
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
14:24:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60592 ']'
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.481 14:24:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:51.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:51.482 14:24:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:51.482 14:24:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:51.482 14:24:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.482 [2024-11-20 14:24:52.434520] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization...
00:07:51.482 [2024-11-20 14:24:52.434729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:51.740 [2024-11-20 14:24:52.622015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.740 [2024-11-20 14:24:52.756021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.999 [2024-11-20 14:24:52.962820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:51.999 [2024-11-20 14:24:52.962882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.563 Base_1
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.563 Base_2
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.563 [2024-11-20 14:24:53.430894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:52.563 [2024-11-20 14:24:53.433292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:52.563 [2024-11-20 14:24:53.433368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:52.563 [2024-11-20 14:24:53.433389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:52.563 [2024-11-20 14:24:53.433727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:52.563 [2024-11-20 14:24:53.433888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:52.563 [2024-11-20 14:24:53.433903]
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:52.563 [2024-11-20 14:24:53.434069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.563 [2024-11-20 14:24:53.438882] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:52.563 [2024-11-20 14:24:53.438920] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:52.563 true 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:52.563 [2024-11-20 14:24:53.451091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:52.563 14:24:53 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.563 [2024-11-20 14:24:53.498874] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:52.563 [2024-11-20 14:24:53.498905] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:52.563 [2024-11-20 14:24:53.498942] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:52.563 true 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:52.563 [2024-11-20 14:24:53.511074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:52.563 14:24:53 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60592 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60592 ']' 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60592 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60592 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.563 killing process with pid 60592 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60592' 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60592 00:07:52.563 [2024-11-20 14:24:53.591173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.563 14:24:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60592 00:07:52.563 [2024-11-20 14:24:53.591281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.563 [2024-11-20 14:24:53.591343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.563 [2024-11-20 14:24:53.591357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:52.563 [2024-11-20 14:24:53.606735] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:07:53.938 14:24:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:53.938 00:07:53.938 real 0m2.371s 00:07:53.938 user 0m2.617s 00:07:53.938 sys 0m0.378s 00:07:53.938 14:24:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.938 14:24:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.938 ************************************ 00:07:53.938 END TEST raid0_resize_test 00:07:53.938 ************************************ 00:07:53.938 14:24:54 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:53.938 14:24:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.938 14:24:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.938 14:24:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.938 ************************************ 00:07:53.938 START TEST raid1_resize_test 00:07:53.938 ************************************ 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:53.938 14:24:54 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60648 00:07:53.938 Process raid pid: 60648 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60648' 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60648 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60648 ']' 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.938 14:24:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.938 [2024-11-20 14:24:54.863646] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:07:53.938 [2024-11-20 14:24:54.863832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.195 [2024-11-20 14:24:55.046247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.195 [2024-11-20 14:24:55.180913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.452 [2024-11-20 14:24:55.388446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.452 [2024-11-20 14:24:55.388508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 Base_1 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 Base_2 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 [2024-11-20 14:24:55.828951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:55.016 [2024-11-20 14:24:55.831371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:55.016 [2024-11-20 14:24:55.831453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:55.016 [2024-11-20 14:24:55.831472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:55.016 [2024-11-20 14:24:55.831810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:55.016 [2024-11-20 14:24:55.831981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:55.016 [2024-11-20 14:24:55.831996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:55.016 [2024-11-20 14:24:55.832165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 [2024-11-20 14:24:55.836954] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:55.016 [2024-11-20 14:24:55.836996] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:55.016 true 00:07:55.016 
14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:55.016 [2024-11-20 14:24:55.849140] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 [2024-11-20 14:24:55.900968] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:55.016 [2024-11-20 14:24:55.901002] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:55.016 [2024-11-20 14:24:55.901044] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:55.016 true 00:07:55.016 14:24:55 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 [2024-11-20 14:24:55.913150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60648 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60648 ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60648 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60648 00:07:55.016 killing process with pid 60648 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.016 14:24:55 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60648' 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60648 00:07:55.016 [2024-11-20 14:24:55.992782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.016 [2024-11-20 14:24:55.992893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.016 14:24:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60648 00:07:55.016 [2024-11-20 14:24:55.993524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.016 [2024-11-20 14:24:55.993555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:55.016 [2024-11-20 14:24:56.008558] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.418 14:24:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:56.418 00:07:56.418 real 0m2.295s 00:07:56.418 user 0m2.565s 00:07:56.418 sys 0m0.340s 00:07:56.418 14:24:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.418 14:24:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.418 ************************************ 00:07:56.418 END TEST raid1_resize_test 00:07:56.418 ************************************ 00:07:56.418 14:24:57 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:56.418 14:24:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:56.418 14:24:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:56.418 14:24:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.418 14:24:57 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.418 14:24:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.418 ************************************ 00:07:56.418 START TEST raid_state_function_test 00:07:56.418 ************************************ 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:56.418 Process raid pid: 60711 00:07:56.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60711 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60711' 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60711 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60711 ']' 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.418 14:24:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.418 [2024-11-20 14:24:57.209871] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:07:56.418 [2024-11-20 14:24:57.210295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.418 [2024-11-20 14:24:57.396135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.675 [2024-11-20 14:24:57.529226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.932 [2024-11-20 14:24:57.738203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.932 [2024-11-20 14:24:57.738266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.189 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.189 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:57.189 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.189 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.189 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.455 [2024-11-20 14:24:58.246877] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.455 [2024-11-20 14:24:58.246954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.455 [2024-11-20 14:24:58.246972] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.455 [2024-11-20 14:24:58.246989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.455 14:24:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.455 "name": "Existed_Raid", 00:07:57.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.455 "strip_size_kb": 64, 00:07:57.455 "state": "configuring", 00:07:57.455 
"raid_level": "raid0", 00:07:57.455 "superblock": false, 00:07:57.455 "num_base_bdevs": 2, 00:07:57.455 "num_base_bdevs_discovered": 0, 00:07:57.455 "num_base_bdevs_operational": 2, 00:07:57.455 "base_bdevs_list": [ 00:07:57.455 { 00:07:57.455 "name": "BaseBdev1", 00:07:57.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.455 "is_configured": false, 00:07:57.455 "data_offset": 0, 00:07:57.455 "data_size": 0 00:07:57.455 }, 00:07:57.455 { 00:07:57.455 "name": "BaseBdev2", 00:07:57.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.455 "is_configured": false, 00:07:57.455 "data_offset": 0, 00:07:57.455 "data_size": 0 00:07:57.455 } 00:07:57.455 ] 00:07:57.455 }' 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.455 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.715 [2024-11-20 14:24:58.718990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.715 [2024-11-20 14:24:58.719036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:57.715 [2024-11-20 14:24:58.726946] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.715 [2024-11-20 14:24:58.727001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.715 [2024-11-20 14:24:58.727017] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.715 [2024-11-20 14:24:58.727037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.715 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.973 [2024-11-20 14:24:58.772329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.973 BaseBdev1 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.973 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.973 [ 00:07:57.973 { 00:07:57.973 "name": "BaseBdev1", 00:07:57.973 "aliases": [ 00:07:57.973 "57357f45-4106-418e-9103-17f3a91d6502" 00:07:57.973 ], 00:07:57.973 "product_name": "Malloc disk", 00:07:57.973 "block_size": 512, 00:07:57.973 "num_blocks": 65536, 00:07:57.973 "uuid": "57357f45-4106-418e-9103-17f3a91d6502", 00:07:57.973 "assigned_rate_limits": { 00:07:57.973 "rw_ios_per_sec": 0, 00:07:57.973 "rw_mbytes_per_sec": 0, 00:07:57.973 "r_mbytes_per_sec": 0, 00:07:57.973 "w_mbytes_per_sec": 0 00:07:57.973 }, 00:07:57.973 "claimed": true, 00:07:57.973 "claim_type": "exclusive_write", 00:07:57.973 "zoned": false, 00:07:57.973 "supported_io_types": { 00:07:57.973 "read": true, 00:07:57.973 "write": true, 00:07:57.973 "unmap": true, 00:07:57.973 "flush": true, 00:07:57.973 "reset": true, 00:07:57.973 "nvme_admin": false, 00:07:57.973 "nvme_io": false, 00:07:57.973 "nvme_io_md": false, 00:07:57.973 "write_zeroes": true, 00:07:57.973 "zcopy": true, 00:07:57.973 "get_zone_info": false, 00:07:57.973 "zone_management": false, 00:07:57.973 "zone_append": false, 00:07:57.973 "compare": false, 00:07:57.973 "compare_and_write": false, 00:07:57.973 "abort": true, 00:07:57.973 "seek_hole": false, 00:07:57.973 "seek_data": false, 00:07:57.973 "copy": true, 00:07:57.973 "nvme_iov_md": 
false 00:07:57.973 }, 00:07:57.973 "memory_domains": [ 00:07:57.973 { 00:07:57.973 "dma_device_id": "system", 00:07:57.973 "dma_device_type": 1 00:07:57.973 }, 00:07:57.973 { 00:07:57.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.974 "dma_device_type": 2 00:07:57.974 } 00:07:57.974 ], 00:07:57.974 "driver_specific": {} 00:07:57.974 } 00:07:57.974 ] 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.974 
14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.974 "name": "Existed_Raid", 00:07:57.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.974 "strip_size_kb": 64, 00:07:57.974 "state": "configuring", 00:07:57.974 "raid_level": "raid0", 00:07:57.974 "superblock": false, 00:07:57.974 "num_base_bdevs": 2, 00:07:57.974 "num_base_bdevs_discovered": 1, 00:07:57.974 "num_base_bdevs_operational": 2, 00:07:57.974 "base_bdevs_list": [ 00:07:57.974 { 00:07:57.974 "name": "BaseBdev1", 00:07:57.974 "uuid": "57357f45-4106-418e-9103-17f3a91d6502", 00:07:57.974 "is_configured": true, 00:07:57.974 "data_offset": 0, 00:07:57.974 "data_size": 65536 00:07:57.974 }, 00:07:57.974 { 00:07:57.974 "name": "BaseBdev2", 00:07:57.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.974 "is_configured": false, 00:07:57.974 "data_offset": 0, 00:07:57.974 "data_size": 0 00:07:57.974 } 00:07:57.974 ] 00:07:57.974 }' 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.974 14:24:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.539 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.539 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.539 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.539 [2024-11-20 14:24:59.328544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.539 [2024-11-20 14:24:59.328611] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:58.539 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.539 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.540 [2024-11-20 14:24:59.336571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.540 [2024-11-20 14:24:59.339220] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.540 [2024-11-20 14:24:59.339399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.540 "name": "Existed_Raid", 00:07:58.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.540 "strip_size_kb": 64, 00:07:58.540 "state": "configuring", 00:07:58.540 "raid_level": "raid0", 00:07:58.540 "superblock": false, 00:07:58.540 "num_base_bdevs": 2, 00:07:58.540 "num_base_bdevs_discovered": 1, 00:07:58.540 "num_base_bdevs_operational": 2, 00:07:58.540 "base_bdevs_list": [ 00:07:58.540 { 00:07:58.540 "name": "BaseBdev1", 00:07:58.540 "uuid": "57357f45-4106-418e-9103-17f3a91d6502", 00:07:58.540 "is_configured": true, 00:07:58.540 "data_offset": 0, 00:07:58.540 "data_size": 65536 00:07:58.540 }, 00:07:58.540 { 00:07:58.540 "name": "BaseBdev2", 00:07:58.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.540 "is_configured": false, 00:07:58.540 "data_offset": 0, 00:07:58.540 "data_size": 0 00:07:58.540 } 00:07:58.540 
] 00:07:58.540 }' 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.540 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.797 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.797 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.797 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 [2024-11-20 14:24:59.887231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.057 [2024-11-20 14:24:59.887294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.057 [2024-11-20 14:24:59.887310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:59.057 [2024-11-20 14:24:59.887680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.057 [2024-11-20 14:24:59.887923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:59.057 [2024-11-20 14:24:59.887945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:59.057 [2024-11-20 14:24:59.888261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.057 BaseBdev2 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.057 14:24:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.057 [ 00:07:59.057 { 00:07:59.057 "name": "BaseBdev2", 00:07:59.057 "aliases": [ 00:07:59.057 "832bac89-6bd8-46d9-aae2-6691de374783" 00:07:59.057 ], 00:07:59.057 "product_name": "Malloc disk", 00:07:59.057 "block_size": 512, 00:07:59.057 "num_blocks": 65536, 00:07:59.057 "uuid": "832bac89-6bd8-46d9-aae2-6691de374783", 00:07:59.057 "assigned_rate_limits": { 00:07:59.057 "rw_ios_per_sec": 0, 00:07:59.057 "rw_mbytes_per_sec": 0, 00:07:59.057 "r_mbytes_per_sec": 0, 00:07:59.057 "w_mbytes_per_sec": 0 00:07:59.057 }, 00:07:59.057 "claimed": true, 00:07:59.057 "claim_type": "exclusive_write", 00:07:59.057 "zoned": false, 00:07:59.057 "supported_io_types": { 00:07:59.057 "read": true, 00:07:59.057 "write": true, 00:07:59.057 "unmap": true, 00:07:59.057 "flush": true, 00:07:59.057 "reset": true, 00:07:59.057 "nvme_admin": false, 00:07:59.057 "nvme_io": false, 00:07:59.057 "nvme_io_md": 
false, 00:07:59.057 "write_zeroes": true, 00:07:59.057 "zcopy": true, 00:07:59.057 "get_zone_info": false, 00:07:59.057 "zone_management": false, 00:07:59.057 "zone_append": false, 00:07:59.057 "compare": false, 00:07:59.057 "compare_and_write": false, 00:07:59.057 "abort": true, 00:07:59.057 "seek_hole": false, 00:07:59.057 "seek_data": false, 00:07:59.057 "copy": true, 00:07:59.057 "nvme_iov_md": false 00:07:59.057 }, 00:07:59.057 "memory_domains": [ 00:07:59.057 { 00:07:59.057 "dma_device_id": "system", 00:07:59.057 "dma_device_type": 1 00:07:59.057 }, 00:07:59.057 { 00:07:59.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.057 "dma_device_type": 2 00:07:59.057 } 00:07:59.057 ], 00:07:59.057 "driver_specific": {} 00:07:59.057 } 00:07:59.057 ] 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.057 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.058 "name": "Existed_Raid", 00:07:59.058 "uuid": "8238f94b-f64b-4c55-901e-4db0c714f9c0", 00:07:59.058 "strip_size_kb": 64, 00:07:59.058 "state": "online", 00:07:59.058 "raid_level": "raid0", 00:07:59.058 "superblock": false, 00:07:59.058 "num_base_bdevs": 2, 00:07:59.058 "num_base_bdevs_discovered": 2, 00:07:59.058 "num_base_bdevs_operational": 2, 00:07:59.058 "base_bdevs_list": [ 00:07:59.058 { 00:07:59.058 "name": "BaseBdev1", 00:07:59.058 "uuid": "57357f45-4106-418e-9103-17f3a91d6502", 00:07:59.058 "is_configured": true, 00:07:59.058 "data_offset": 0, 00:07:59.058 "data_size": 65536 00:07:59.058 }, 00:07:59.058 { 00:07:59.058 "name": "BaseBdev2", 00:07:59.058 "uuid": "832bac89-6bd8-46d9-aae2-6691de374783", 00:07:59.058 "is_configured": true, 00:07:59.058 "data_offset": 0, 00:07:59.058 "data_size": 65536 00:07:59.058 } 00:07:59.058 ] 00:07:59.058 }' 00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:59.058 14:24:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.625 [2024-11-20 14:25:00.439823] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.625 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.625 "name": "Existed_Raid", 00:07:59.625 "aliases": [ 00:07:59.625 "8238f94b-f64b-4c55-901e-4db0c714f9c0" 00:07:59.625 ], 00:07:59.625 "product_name": "Raid Volume", 00:07:59.625 "block_size": 512, 00:07:59.625 "num_blocks": 131072, 00:07:59.625 "uuid": "8238f94b-f64b-4c55-901e-4db0c714f9c0", 00:07:59.625 "assigned_rate_limits": { 00:07:59.625 "rw_ios_per_sec": 0, 00:07:59.625 "rw_mbytes_per_sec": 0, 00:07:59.625 "r_mbytes_per_sec": 
0, 00:07:59.625 "w_mbytes_per_sec": 0 00:07:59.625 }, 00:07:59.626 "claimed": false, 00:07:59.626 "zoned": false, 00:07:59.626 "supported_io_types": { 00:07:59.626 "read": true, 00:07:59.626 "write": true, 00:07:59.626 "unmap": true, 00:07:59.626 "flush": true, 00:07:59.626 "reset": true, 00:07:59.626 "nvme_admin": false, 00:07:59.626 "nvme_io": false, 00:07:59.626 "nvme_io_md": false, 00:07:59.626 "write_zeroes": true, 00:07:59.626 "zcopy": false, 00:07:59.626 "get_zone_info": false, 00:07:59.626 "zone_management": false, 00:07:59.626 "zone_append": false, 00:07:59.626 "compare": false, 00:07:59.626 "compare_and_write": false, 00:07:59.626 "abort": false, 00:07:59.626 "seek_hole": false, 00:07:59.626 "seek_data": false, 00:07:59.626 "copy": false, 00:07:59.626 "nvme_iov_md": false 00:07:59.626 }, 00:07:59.626 "memory_domains": [ 00:07:59.626 { 00:07:59.626 "dma_device_id": "system", 00:07:59.626 "dma_device_type": 1 00:07:59.626 }, 00:07:59.626 { 00:07:59.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.626 "dma_device_type": 2 00:07:59.626 }, 00:07:59.626 { 00:07:59.626 "dma_device_id": "system", 00:07:59.626 "dma_device_type": 1 00:07:59.626 }, 00:07:59.626 { 00:07:59.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.626 "dma_device_type": 2 00:07:59.626 } 00:07:59.626 ], 00:07:59.626 "driver_specific": { 00:07:59.626 "raid": { 00:07:59.626 "uuid": "8238f94b-f64b-4c55-901e-4db0c714f9c0", 00:07:59.626 "strip_size_kb": 64, 00:07:59.626 "state": "online", 00:07:59.626 "raid_level": "raid0", 00:07:59.626 "superblock": false, 00:07:59.626 "num_base_bdevs": 2, 00:07:59.626 "num_base_bdevs_discovered": 2, 00:07:59.626 "num_base_bdevs_operational": 2, 00:07:59.626 "base_bdevs_list": [ 00:07:59.626 { 00:07:59.626 "name": "BaseBdev1", 00:07:59.626 "uuid": "57357f45-4106-418e-9103-17f3a91d6502", 00:07:59.626 "is_configured": true, 00:07:59.626 "data_offset": 0, 00:07:59.626 "data_size": 65536 00:07:59.626 }, 00:07:59.626 { 00:07:59.626 "name": "BaseBdev2", 
00:07:59.626 "uuid": "832bac89-6bd8-46d9-aae2-6691de374783", 00:07:59.626 "is_configured": true, 00:07:59.626 "data_offset": 0, 00:07:59.626 "data_size": 65536 00:07:59.626 } 00:07:59.626 ] 00:07:59.626 } 00:07:59.626 } 00:07:59.626 }' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.626 BaseBdev2' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.626 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.626 [2024-11-20 14:25:00.675526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.626 [2024-11-20 14:25:00.675570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.626 [2024-11-20 14:25:00.675656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.884 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.885 "name": "Existed_Raid", 00:07:59.885 "uuid": "8238f94b-f64b-4c55-901e-4db0c714f9c0", 00:07:59.885 "strip_size_kb": 64, 00:07:59.885 
"state": "offline", 00:07:59.885 "raid_level": "raid0", 00:07:59.885 "superblock": false, 00:07:59.885 "num_base_bdevs": 2, 00:07:59.885 "num_base_bdevs_discovered": 1, 00:07:59.885 "num_base_bdevs_operational": 1, 00:07:59.885 "base_bdevs_list": [ 00:07:59.885 { 00:07:59.885 "name": null, 00:07:59.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.885 "is_configured": false, 00:07:59.885 "data_offset": 0, 00:07:59.885 "data_size": 65536 00:07:59.885 }, 00:07:59.885 { 00:07:59.885 "name": "BaseBdev2", 00:07:59.885 "uuid": "832bac89-6bd8-46d9-aae2-6691de374783", 00:07:59.885 "is_configured": true, 00:07:59.885 "data_offset": 0, 00:07:59.885 "data_size": 65536 00:07:59.885 } 00:07:59.885 ] 00:07:59.885 }' 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.885 14:25:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.450 [2024-11-20 14:25:01.317909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.450 [2024-11-20 14:25:01.317986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60711 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60711 ']' 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60711 00:08:00.450 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:00.451 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.451 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60711 00:08:00.451 killing process with pid 60711 00:08:00.451 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.451 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.451 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60711' 00:08:00.451 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60711 00:08:00.451 [2024-11-20 14:25:01.493614] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.451 14:25:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60711 00:08:00.708 [2024-11-20 14:25:01.508309] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:01.642 ************************************ 00:08:01.642 END TEST raid_state_function_test 00:08:01.642 ************************************ 00:08:01.642 00:08:01.642 real 0m5.470s 00:08:01.642 user 0m8.220s 00:08:01.642 sys 0m0.792s 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.642 14:25:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:01.642 14:25:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:01.642 14:25:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.642 14:25:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.642 ************************************ 00:08:01.642 START TEST raid_state_function_test_sb 00:08:01.642 ************************************ 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:01.642 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:01.643 Process raid pid: 60964 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60964 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60964' 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60964 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60964 ']' 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.643 14:25:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.901 [2024-11-20 14:25:02.713932] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:01.901 [2024-11-20 14:25:02.714985] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.901 [2024-11-20 14:25:02.907529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.159 [2024-11-20 14:25:03.045200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.416 [2024-11-20 14:25:03.253714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.416 [2024-11-20 14:25:03.253766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:02.983 [2024-11-20 14:25:03.784024] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:02.983 [2024-11-20 14:25:03.784099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:02.983 [2024-11-20 14:25:03.784117] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.983 [2024-11-20 14:25:03.784133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.983 14:25:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.983 "name": "Existed_Raid", 00:08:02.983 "uuid": "91804573-917d-4cf4-a529-a6ca2c56bbf8", 00:08:02.983 "strip_size_kb": 64, 00:08:02.983 "state": "configuring", 00:08:02.983 "raid_level": "raid0", 00:08:02.983 "superblock": true, 00:08:02.983 "num_base_bdevs": 2, 00:08:02.983 "num_base_bdevs_discovered": 0, 00:08:02.983 "num_base_bdevs_operational": 2, 00:08:02.983 "base_bdevs_list": [ 00:08:02.983 { 00:08:02.983 "name": "BaseBdev1", 00:08:02.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.983 "is_configured": false, 00:08:02.983 "data_offset": 0, 00:08:02.983 "data_size": 0 00:08:02.983 }, 00:08:02.983 { 00:08:02.983 "name": "BaseBdev2", 00:08:02.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.983 "is_configured": false, 00:08:02.983 "data_offset": 0, 00:08:02.983 "data_size": 0 00:08:02.983 } 00:08:02.983 ] 00:08:02.983 }' 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.983 14:25:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.241 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:03.241 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.241 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.241 [2024-11-20 
14:25:04.284088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.241 [2024-11-20 14:25:04.284133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:03.241 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.241 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:03.241 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.241 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.241 [2024-11-20 14:25:04.292063] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.241 [2024-11-20 14:25:04.292116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.241 [2024-11-20 14:25:04.292132] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.241 [2024-11-20 14:25:04.292151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.499 [2024-11-20 14:25:04.337259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.499 BaseBdev1 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.499 [ 00:08:03.499 { 00:08:03.499 "name": "BaseBdev1", 00:08:03.499 "aliases": [ 00:08:03.499 "f2bc4c3d-5859-48de-96fe-059992952adc" 00:08:03.499 ], 00:08:03.499 "product_name": "Malloc disk", 00:08:03.499 "block_size": 512, 00:08:03.499 "num_blocks": 65536, 00:08:03.499 "uuid": "f2bc4c3d-5859-48de-96fe-059992952adc", 00:08:03.499 "assigned_rate_limits": { 00:08:03.499 "rw_ios_per_sec": 0, 00:08:03.499 "rw_mbytes_per_sec": 0, 00:08:03.499 "r_mbytes_per_sec": 0, 00:08:03.499 
"w_mbytes_per_sec": 0 00:08:03.499 }, 00:08:03.499 "claimed": true, 00:08:03.499 "claim_type": "exclusive_write", 00:08:03.499 "zoned": false, 00:08:03.499 "supported_io_types": { 00:08:03.499 "read": true, 00:08:03.499 "write": true, 00:08:03.499 "unmap": true, 00:08:03.499 "flush": true, 00:08:03.499 "reset": true, 00:08:03.499 "nvme_admin": false, 00:08:03.499 "nvme_io": false, 00:08:03.499 "nvme_io_md": false, 00:08:03.499 "write_zeroes": true, 00:08:03.499 "zcopy": true, 00:08:03.499 "get_zone_info": false, 00:08:03.499 "zone_management": false, 00:08:03.499 "zone_append": false, 00:08:03.499 "compare": false, 00:08:03.499 "compare_and_write": false, 00:08:03.499 "abort": true, 00:08:03.499 "seek_hole": false, 00:08:03.499 "seek_data": false, 00:08:03.499 "copy": true, 00:08:03.499 "nvme_iov_md": false 00:08:03.499 }, 00:08:03.499 "memory_domains": [ 00:08:03.499 { 00:08:03.499 "dma_device_id": "system", 00:08:03.499 "dma_device_type": 1 00:08:03.499 }, 00:08:03.499 { 00:08:03.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.499 "dma_device_type": 2 00:08:03.499 } 00:08:03.499 ], 00:08:03.499 "driver_specific": {} 00:08:03.499 } 00:08:03.499 ] 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.499 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.500 "name": "Existed_Raid", 00:08:03.500 "uuid": "a2189506-debe-4462-8fd8-586a74edc2a7", 00:08:03.500 "strip_size_kb": 64, 00:08:03.500 "state": "configuring", 00:08:03.500 "raid_level": "raid0", 00:08:03.500 "superblock": true, 00:08:03.500 "num_base_bdevs": 2, 00:08:03.500 "num_base_bdevs_discovered": 1, 00:08:03.500 "num_base_bdevs_operational": 2, 00:08:03.500 "base_bdevs_list": [ 00:08:03.500 { 00:08:03.500 "name": "BaseBdev1", 00:08:03.500 "uuid": "f2bc4c3d-5859-48de-96fe-059992952adc", 00:08:03.500 "is_configured": true, 00:08:03.500 "data_offset": 2048, 00:08:03.500 "data_size": 63488 00:08:03.500 }, 00:08:03.500 { 00:08:03.500 "name": "BaseBdev2", 00:08:03.500 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:03.500 "is_configured": false, 00:08:03.500 "data_offset": 0, 00:08:03.500 "data_size": 0 00:08:03.500 } 00:08:03.500 ] 00:08:03.500 }' 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.500 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.069 [2024-11-20 14:25:04.889466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.069 [2024-11-20 14:25:04.889553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.069 [2024-11-20 14:25:04.897530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.069 [2024-11-20 14:25:04.900132] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.069 [2024-11-20 14:25:04.900303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.069 "name": "Existed_Raid", 00:08:04.069 "uuid": "c0913144-834c-4bd1-9b6a-c9d575008e10", 00:08:04.069 "strip_size_kb": 64, 00:08:04.069 "state": "configuring", 00:08:04.069 "raid_level": "raid0", 00:08:04.069 "superblock": true, 00:08:04.069 "num_base_bdevs": 2, 00:08:04.069 "num_base_bdevs_discovered": 1, 00:08:04.069 "num_base_bdevs_operational": 2, 00:08:04.069 "base_bdevs_list": [ 00:08:04.069 { 00:08:04.069 "name": "BaseBdev1", 00:08:04.069 "uuid": "f2bc4c3d-5859-48de-96fe-059992952adc", 00:08:04.069 "is_configured": true, 00:08:04.069 "data_offset": 2048, 00:08:04.069 "data_size": 63488 00:08:04.069 }, 00:08:04.069 { 00:08:04.069 "name": "BaseBdev2", 00:08:04.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.069 "is_configured": false, 00:08:04.069 "data_offset": 0, 00:08:04.069 "data_size": 0 00:08:04.069 } 00:08:04.069 ] 00:08:04.069 }' 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.069 14:25:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.636 [2024-11-20 14:25:05.444166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.636 [2024-11-20 14:25:05.444489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.636 [2024-11-20 14:25:05.444510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.636 BaseBdev2 
00:08:04.636 [2024-11-20 14:25:05.444888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:04.636 [2024-11-20 14:25:05.445099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.636 [2024-11-20 14:25:05.445122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:04.636 [2024-11-20 14:25:05.445293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.636 [ 00:08:04.636 { 00:08:04.636 "name": "BaseBdev2", 00:08:04.636 "aliases": [ 00:08:04.636 "fa03fbf5-89d0-4d62-b145-ff2f6614afea" 00:08:04.636 ], 00:08:04.636 "product_name": "Malloc disk", 00:08:04.636 "block_size": 512, 00:08:04.636 "num_blocks": 65536, 00:08:04.636 "uuid": "fa03fbf5-89d0-4d62-b145-ff2f6614afea", 00:08:04.636 "assigned_rate_limits": { 00:08:04.636 "rw_ios_per_sec": 0, 00:08:04.636 "rw_mbytes_per_sec": 0, 00:08:04.636 "r_mbytes_per_sec": 0, 00:08:04.636 "w_mbytes_per_sec": 0 00:08:04.636 }, 00:08:04.636 "claimed": true, 00:08:04.636 "claim_type": "exclusive_write", 00:08:04.636 "zoned": false, 00:08:04.636 "supported_io_types": { 00:08:04.636 "read": true, 00:08:04.636 "write": true, 00:08:04.636 "unmap": true, 00:08:04.636 "flush": true, 00:08:04.636 "reset": true, 00:08:04.636 "nvme_admin": false, 00:08:04.636 "nvme_io": false, 00:08:04.636 "nvme_io_md": false, 00:08:04.636 "write_zeroes": true, 00:08:04.636 "zcopy": true, 00:08:04.636 "get_zone_info": false, 00:08:04.636 "zone_management": false, 00:08:04.636 "zone_append": false, 00:08:04.636 "compare": false, 00:08:04.636 "compare_and_write": false, 00:08:04.636 "abort": true, 00:08:04.636 "seek_hole": false, 00:08:04.636 "seek_data": false, 00:08:04.636 "copy": true, 00:08:04.636 "nvme_iov_md": false 00:08:04.636 }, 00:08:04.636 "memory_domains": [ 00:08:04.636 { 00:08:04.636 "dma_device_id": "system", 00:08:04.636 "dma_device_type": 1 00:08:04.636 }, 00:08:04.636 { 00:08:04.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.636 "dma_device_type": 2 00:08:04.636 } 00:08:04.636 ], 00:08:04.636 "driver_specific": {} 00:08:04.636 } 00:08:04.636 ] 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.636 "name": "Existed_Raid", 00:08:04.636 "uuid": "c0913144-834c-4bd1-9b6a-c9d575008e10", 00:08:04.636 "strip_size_kb": 64, 00:08:04.636 "state": "online", 00:08:04.636 "raid_level": "raid0", 00:08:04.636 "superblock": true, 00:08:04.636 "num_base_bdevs": 2, 00:08:04.636 "num_base_bdevs_discovered": 2, 00:08:04.636 "num_base_bdevs_operational": 2, 00:08:04.636 "base_bdevs_list": [ 00:08:04.636 { 00:08:04.636 "name": "BaseBdev1", 00:08:04.636 "uuid": "f2bc4c3d-5859-48de-96fe-059992952adc", 00:08:04.636 "is_configured": true, 00:08:04.636 "data_offset": 2048, 00:08:04.636 "data_size": 63488 00:08:04.636 }, 00:08:04.636 { 00:08:04.636 "name": "BaseBdev2", 00:08:04.636 "uuid": "fa03fbf5-89d0-4d62-b145-ff2f6614afea", 00:08:04.636 "is_configured": true, 00:08:04.636 "data_offset": 2048, 00:08:04.636 "data_size": 63488 00:08:04.636 } 00:08:04.636 ] 00:08:04.636 }' 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.636 14:25:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:05.203 14:25:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:05.203 [2024-11-20 14:25:06.020726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.203 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:05.203 "name": "Existed_Raid", 00:08:05.203 "aliases": [ 00:08:05.203 "c0913144-834c-4bd1-9b6a-c9d575008e10" 00:08:05.203 ], 00:08:05.203 "product_name": "Raid Volume", 00:08:05.203 "block_size": 512, 00:08:05.203 "num_blocks": 126976, 00:08:05.203 "uuid": "c0913144-834c-4bd1-9b6a-c9d575008e10", 00:08:05.203 "assigned_rate_limits": { 00:08:05.203 "rw_ios_per_sec": 0, 00:08:05.203 "rw_mbytes_per_sec": 0, 00:08:05.203 "r_mbytes_per_sec": 0, 00:08:05.203 "w_mbytes_per_sec": 0 00:08:05.203 }, 00:08:05.203 "claimed": false, 00:08:05.203 "zoned": false, 00:08:05.203 "supported_io_types": { 00:08:05.203 "read": true, 00:08:05.203 "write": true, 00:08:05.203 "unmap": true, 00:08:05.203 "flush": true, 00:08:05.203 "reset": true, 00:08:05.203 "nvme_admin": false, 00:08:05.203 "nvme_io": false, 00:08:05.203 "nvme_io_md": false, 00:08:05.203 "write_zeroes": true, 00:08:05.203 "zcopy": false, 00:08:05.203 "get_zone_info": false, 00:08:05.203 "zone_management": false, 00:08:05.203 "zone_append": false, 00:08:05.203 "compare": false, 00:08:05.203 "compare_and_write": false, 00:08:05.203 "abort": false, 00:08:05.203 "seek_hole": false, 00:08:05.203 "seek_data": false, 00:08:05.203 "copy": false, 00:08:05.203 "nvme_iov_md": 
false 00:08:05.203 }, 00:08:05.203 "memory_domains": [ 00:08:05.203 { 00:08:05.203 "dma_device_id": "system", 00:08:05.203 "dma_device_type": 1 00:08:05.203 }, 00:08:05.203 { 00:08:05.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.203 "dma_device_type": 2 00:08:05.203 }, 00:08:05.203 { 00:08:05.203 "dma_device_id": "system", 00:08:05.203 "dma_device_type": 1 00:08:05.203 }, 00:08:05.203 { 00:08:05.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.203 "dma_device_type": 2 00:08:05.203 } 00:08:05.203 ], 00:08:05.203 "driver_specific": { 00:08:05.203 "raid": { 00:08:05.203 "uuid": "c0913144-834c-4bd1-9b6a-c9d575008e10", 00:08:05.203 "strip_size_kb": 64, 00:08:05.203 "state": "online", 00:08:05.203 "raid_level": "raid0", 00:08:05.203 "superblock": true, 00:08:05.203 "num_base_bdevs": 2, 00:08:05.203 "num_base_bdevs_discovered": 2, 00:08:05.203 "num_base_bdevs_operational": 2, 00:08:05.203 "base_bdevs_list": [ 00:08:05.203 { 00:08:05.203 "name": "BaseBdev1", 00:08:05.203 "uuid": "f2bc4c3d-5859-48de-96fe-059992952adc", 00:08:05.203 "is_configured": true, 00:08:05.203 "data_offset": 2048, 00:08:05.203 "data_size": 63488 00:08:05.203 }, 00:08:05.203 { 00:08:05.203 "name": "BaseBdev2", 00:08:05.203 "uuid": "fa03fbf5-89d0-4d62-b145-ff2f6614afea", 00:08:05.203 "is_configured": true, 00:08:05.203 "data_offset": 2048, 00:08:05.203 "data_size": 63488 00:08:05.203 } 00:08:05.203 ] 00:08:05.203 } 00:08:05.203 } 00:08:05.203 }' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:05.204 BaseBdev2' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.204 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:05.463 
14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.463 [2024-11-20 14:25:06.272456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:05.463 [2024-11-20 14:25:06.272639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:05.463 [2024-11-20 14:25:06.272732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.463 14:25:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.463 "name": "Existed_Raid", 00:08:05.463 "uuid": "c0913144-834c-4bd1-9b6a-c9d575008e10", 00:08:05.463 "strip_size_kb": 64, 00:08:05.463 "state": "offline", 00:08:05.463 "raid_level": "raid0", 00:08:05.463 "superblock": true, 00:08:05.463 "num_base_bdevs": 2, 00:08:05.463 "num_base_bdevs_discovered": 1, 00:08:05.463 "num_base_bdevs_operational": 1, 00:08:05.463 "base_bdevs_list": [ 00:08:05.463 { 00:08:05.463 "name": null, 00:08:05.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.463 "is_configured": false, 00:08:05.463 "data_offset": 0, 00:08:05.463 "data_size": 63488 00:08:05.463 }, 00:08:05.463 { 00:08:05.463 "name": "BaseBdev2", 00:08:05.463 "uuid": "fa03fbf5-89d0-4d62-b145-ff2f6614afea", 00:08:05.463 "is_configured": true, 
00:08:05.463 "data_offset": 2048, 00:08:05.463 "data_size": 63488 00:08:05.463 } 00:08:05.463 ] 00:08:05.463 }' 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.463 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.029 14:25:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.029 [2024-11-20 14:25:06.926239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.029 [2024-11-20 14:25:06.926462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:06.029 14:25:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60964 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60964 ']' 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60964 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.029 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60964 00:08:06.286 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:08:06.286 killing process with pid 60964 00:08:06.286 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.286 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60964' 00:08:06.286 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60964 00:08:06.286 [2024-11-20 14:25:07.101556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.286 14:25:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60964 00:08:06.286 [2024-11-20 14:25:07.116193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.221 14:25:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:07.221 00:08:07.221 real 0m5.548s 00:08:07.221 user 0m8.434s 00:08:07.221 sys 0m0.727s 00:08:07.221 ************************************ 00:08:07.221 END TEST raid_state_function_test_sb 00:08:07.221 ************************************ 00:08:07.221 14:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.221 14:25:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 14:25:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:07.221 14:25:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:07.221 14:25:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.221 14:25:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:07.221 ************************************ 00:08:07.221 START TEST raid_superblock_test 00:08:07.221 ************************************ 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61221 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61221 00:08:07.221 
14:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61221 ']' 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.221 14:25:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.479 [2024-11-20 14:25:08.331737] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:07.479 [2024-11-20 14:25:08.332084] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61221 ] 00:08:07.479 [2024-11-20 14:25:08.518046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.736 [2024-11-20 14:25:08.654229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.994 [2024-11-20 14:25:08.883168] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.994 [2024-11-20 14:25:08.883519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 
00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.251 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 malloc1 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 [2024-11-20 14:25:09.355777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.509 [2024-11-20 14:25:09.355992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.509 [2024-11-20 14:25:09.356074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:08:08.509 [2024-11-20 14:25:09.356262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.509 [2024-11-20 14:25:09.359201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.509 [2024-11-20 14:25:09.359374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.509 pt1 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 malloc2 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 [2024-11-20 14:25:09.412648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.509 [2024-11-20 14:25:09.412720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.509 [2024-11-20 14:25:09.412761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:08.509 [2024-11-20 14:25:09.412786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.509 [2024-11-20 14:25:09.415579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.509 [2024-11-20 14:25:09.415646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.509 pt2 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 [2024-11-20 14:25:09.420714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:08.509 [2024-11-20 14:25:09.423229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.509 [2024-11-20 14:25:09.423447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:08:08.509 [2024-11-20 14:25:09.423467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:08.509 [2024-11-20 14:25:09.423802] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:08.509 [2024-11-20 14:25:09.424010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:08.509 [2024-11-20 14:25:09.424031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:08.509 [2024-11-20 14:25:09.424228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.509 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.509 "name": "raid_bdev1", 00:08:08.509 "uuid": "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83", 00:08:08.509 "strip_size_kb": 64, 00:08:08.509 "state": "online", 00:08:08.509 "raid_level": "raid0", 00:08:08.509 "superblock": true, 00:08:08.509 "num_base_bdevs": 2, 00:08:08.509 "num_base_bdevs_discovered": 2, 00:08:08.509 "num_base_bdevs_operational": 2, 00:08:08.509 "base_bdevs_list": [ 00:08:08.509 { 00:08:08.509 "name": "pt1", 00:08:08.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:08.509 "is_configured": true, 00:08:08.509 "data_offset": 2048, 00:08:08.509 "data_size": 63488 00:08:08.509 }, 00:08:08.509 { 00:08:08.509 "name": "pt2", 00:08:08.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.510 "is_configured": true, 00:08:08.510 "data_offset": 2048, 00:08:08.510 "data_size": 63488 00:08:08.510 } 00:08:08.510 ] 00:08:08.510 }' 00:08:08.510 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.510 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.074 [2024-11-20 14:25:09.905164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.074 "name": "raid_bdev1", 00:08:09.074 "aliases": [ 00:08:09.074 "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83" 00:08:09.074 ], 00:08:09.074 "product_name": "Raid Volume", 00:08:09.074 "block_size": 512, 00:08:09.074 "num_blocks": 126976, 00:08:09.074 "uuid": "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83", 00:08:09.074 "assigned_rate_limits": { 00:08:09.074 "rw_ios_per_sec": 0, 00:08:09.074 "rw_mbytes_per_sec": 0, 00:08:09.074 "r_mbytes_per_sec": 0, 00:08:09.074 "w_mbytes_per_sec": 0 00:08:09.074 }, 00:08:09.074 "claimed": false, 00:08:09.074 "zoned": false, 00:08:09.074 "supported_io_types": { 00:08:09.074 "read": true, 00:08:09.074 "write": true, 00:08:09.074 "unmap": true, 00:08:09.074 "flush": true, 00:08:09.074 "reset": true, 00:08:09.074 "nvme_admin": false, 00:08:09.074 "nvme_io": false, 00:08:09.074 "nvme_io_md": false, 00:08:09.074 "write_zeroes": true, 00:08:09.074 "zcopy": false, 00:08:09.074 "get_zone_info": false, 00:08:09.074 "zone_management": false, 00:08:09.074 
"zone_append": false, 00:08:09.074 "compare": false, 00:08:09.074 "compare_and_write": false, 00:08:09.074 "abort": false, 00:08:09.074 "seek_hole": false, 00:08:09.074 "seek_data": false, 00:08:09.074 "copy": false, 00:08:09.074 "nvme_iov_md": false 00:08:09.074 }, 00:08:09.074 "memory_domains": [ 00:08:09.074 { 00:08:09.074 "dma_device_id": "system", 00:08:09.074 "dma_device_type": 1 00:08:09.074 }, 00:08:09.074 { 00:08:09.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.074 "dma_device_type": 2 00:08:09.074 }, 00:08:09.074 { 00:08:09.074 "dma_device_id": "system", 00:08:09.074 "dma_device_type": 1 00:08:09.074 }, 00:08:09.074 { 00:08:09.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.074 "dma_device_type": 2 00:08:09.074 } 00:08:09.074 ], 00:08:09.074 "driver_specific": { 00:08:09.074 "raid": { 00:08:09.074 "uuid": "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83", 00:08:09.074 "strip_size_kb": 64, 00:08:09.074 "state": "online", 00:08:09.074 "raid_level": "raid0", 00:08:09.074 "superblock": true, 00:08:09.074 "num_base_bdevs": 2, 00:08:09.074 "num_base_bdevs_discovered": 2, 00:08:09.074 "num_base_bdevs_operational": 2, 00:08:09.074 "base_bdevs_list": [ 00:08:09.074 { 00:08:09.074 "name": "pt1", 00:08:09.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.074 "is_configured": true, 00:08:09.074 "data_offset": 2048, 00:08:09.074 "data_size": 63488 00:08:09.074 }, 00:08:09.074 { 00:08:09.074 "name": "pt2", 00:08:09.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.074 "is_configured": true, 00:08:09.074 "data_offset": 2048, 00:08:09.074 "data_size": 63488 00:08:09.074 } 00:08:09.074 ] 00:08:09.074 } 00:08:09.074 } 00:08:09.074 }' 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.074 14:25:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.074 pt2' 00:08:09.074 14:25:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.074 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 [2024-11-20 14:25:10.149190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5e4fd136-8367-4e06-b3a6-d1bacb9a1a83 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5e4fd136-8367-4e06-b3a6-d1bacb9a1a83 ']' 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 [2024-11-20 14:25:10.196844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.333 [2024-11-20 14:25:10.196997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.333 [2024-11-20 14:25:10.197220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.333 [2024-11-20 14:25:10.197397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.333 [2024-11-20 14:25:10.197588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:09.333 14:25:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:09.333 14:25:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.333 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.333 [2024-11-20 14:25:10.340921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:09.333 [2024-11-20 14:25:10.343444] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:09.334 [2024-11-20 14:25:10.343534] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:09.334 [2024-11-20 14:25:10.343611] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:09.334 [2024-11-20 14:25:10.343653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.334 [2024-11-20 14:25:10.343674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:09.334 request: 00:08:09.334 { 00:08:09.334 "name": "raid_bdev1", 00:08:09.334 "raid_level": "raid0", 00:08:09.334 "base_bdevs": [ 00:08:09.334 "malloc1", 00:08:09.334 "malloc2" 00:08:09.334 ], 00:08:09.334 "strip_size_kb": 64, 00:08:09.334 "superblock": false, 00:08:09.334 "method": "bdev_raid_create", 00:08:09.334 "req_id": 1 00:08:09.334 } 00:08:09.334 Got JSON-RPC error response 00:08:09.334 response: 00:08:09.334 { 00:08:09.334 "code": -17, 00:08:09.334 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:09.334 } 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.334 14:25:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:09.334 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.592 [2024-11-20 14:25:10.396898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.592 [2024-11-20 14:25:10.396965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.592 [2024-11-20 14:25:10.396993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:09.592 [2024-11-20 14:25:10.397010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.592 [2024-11-20 14:25:10.400058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.592 [2024-11-20 14:25:10.400107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.592 [2024-11-20 14:25:10.400207] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:09.592 [2024-11-20 14:25:10.400279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:09.592 pt1 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.592 "name": "raid_bdev1", 00:08:09.592 "uuid": "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83", 00:08:09.592 "strip_size_kb": 64, 00:08:09.592 "state": "configuring", 00:08:09.592 "raid_level": "raid0", 00:08:09.592 "superblock": true, 00:08:09.592 "num_base_bdevs": 2, 00:08:09.592 
"num_base_bdevs_discovered": 1, 00:08:09.592 "num_base_bdevs_operational": 2, 00:08:09.592 "base_bdevs_list": [ 00:08:09.592 { 00:08:09.592 "name": "pt1", 00:08:09.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.592 "is_configured": true, 00:08:09.592 "data_offset": 2048, 00:08:09.592 "data_size": 63488 00:08:09.592 }, 00:08:09.592 { 00:08:09.592 "name": null, 00:08:09.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.592 "is_configured": false, 00:08:09.592 "data_offset": 2048, 00:08:09.592 "data_size": 63488 00:08:09.592 } 00:08:09.592 ] 00:08:09.592 }' 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.592 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.159 [2024-11-20 14:25:10.953077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:10.159 [2024-11-20 14:25:10.953319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.159 [2024-11-20 14:25:10.953364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:10.159 [2024-11-20 14:25:10.953384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.159 [2024-11-20 14:25:10.954026] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.159 [2024-11-20 14:25:10.954065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:10.159 [2024-11-20 14:25:10.954186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:10.159 [2024-11-20 14:25:10.954230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.159 [2024-11-20 14:25:10.954381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.159 [2024-11-20 14:25:10.954403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.159 [2024-11-20 14:25:10.954725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:10.159 [2024-11-20 14:25:10.954924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.159 [2024-11-20 14:25:10.954940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:10.159 [2024-11-20 14:25:10.955109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.159 pt2 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.159 14:25:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.159 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.159 "name": "raid_bdev1", 00:08:10.159 "uuid": "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83", 00:08:10.159 "strip_size_kb": 64, 00:08:10.159 "state": "online", 00:08:10.159 "raid_level": "raid0", 00:08:10.159 "superblock": true, 00:08:10.159 "num_base_bdevs": 2, 00:08:10.159 "num_base_bdevs_discovered": 2, 00:08:10.159 "num_base_bdevs_operational": 2, 00:08:10.159 "base_bdevs_list": [ 00:08:10.159 { 00:08:10.159 "name": "pt1", 00:08:10.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.159 "is_configured": true, 00:08:10.159 "data_offset": 2048, 00:08:10.159 "data_size": 63488 00:08:10.159 }, 00:08:10.159 { 00:08:10.159 "name": "pt2", 00:08:10.159 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:10.159 "is_configured": true, 00:08:10.159 "data_offset": 2048, 00:08:10.159 "data_size": 63488 00:08:10.159 } 00:08:10.159 ] 00:08:10.159 }' 00:08:10.159 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.159 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.418 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.418 [2024-11-20 14:25:11.469518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:10.677 "name": "raid_bdev1", 00:08:10.677 "aliases": [ 00:08:10.677 "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83" 00:08:10.677 ], 00:08:10.677 "product_name": "Raid Volume", 00:08:10.677 "block_size": 512, 00:08:10.677 
"num_blocks": 126976, 00:08:10.677 "uuid": "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83", 00:08:10.677 "assigned_rate_limits": { 00:08:10.677 "rw_ios_per_sec": 0, 00:08:10.677 "rw_mbytes_per_sec": 0, 00:08:10.677 "r_mbytes_per_sec": 0, 00:08:10.677 "w_mbytes_per_sec": 0 00:08:10.677 }, 00:08:10.677 "claimed": false, 00:08:10.677 "zoned": false, 00:08:10.677 "supported_io_types": { 00:08:10.677 "read": true, 00:08:10.677 "write": true, 00:08:10.677 "unmap": true, 00:08:10.677 "flush": true, 00:08:10.677 "reset": true, 00:08:10.677 "nvme_admin": false, 00:08:10.677 "nvme_io": false, 00:08:10.677 "nvme_io_md": false, 00:08:10.677 "write_zeroes": true, 00:08:10.677 "zcopy": false, 00:08:10.677 "get_zone_info": false, 00:08:10.677 "zone_management": false, 00:08:10.677 "zone_append": false, 00:08:10.677 "compare": false, 00:08:10.677 "compare_and_write": false, 00:08:10.677 "abort": false, 00:08:10.677 "seek_hole": false, 00:08:10.677 "seek_data": false, 00:08:10.677 "copy": false, 00:08:10.677 "nvme_iov_md": false 00:08:10.677 }, 00:08:10.677 "memory_domains": [ 00:08:10.677 { 00:08:10.677 "dma_device_id": "system", 00:08:10.677 "dma_device_type": 1 00:08:10.677 }, 00:08:10.677 { 00:08:10.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.677 "dma_device_type": 2 00:08:10.677 }, 00:08:10.677 { 00:08:10.677 "dma_device_id": "system", 00:08:10.677 "dma_device_type": 1 00:08:10.677 }, 00:08:10.677 { 00:08:10.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.677 "dma_device_type": 2 00:08:10.677 } 00:08:10.677 ], 00:08:10.677 "driver_specific": { 00:08:10.677 "raid": { 00:08:10.677 "uuid": "5e4fd136-8367-4e06-b3a6-d1bacb9a1a83", 00:08:10.677 "strip_size_kb": 64, 00:08:10.677 "state": "online", 00:08:10.677 "raid_level": "raid0", 00:08:10.677 "superblock": true, 00:08:10.677 "num_base_bdevs": 2, 00:08:10.677 "num_base_bdevs_discovered": 2, 00:08:10.677 "num_base_bdevs_operational": 2, 00:08:10.677 "base_bdevs_list": [ 00:08:10.677 { 00:08:10.677 "name": "pt1", 
00:08:10.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.677 "is_configured": true, 00:08:10.677 "data_offset": 2048, 00:08:10.677 "data_size": 63488 00:08:10.677 }, 00:08:10.677 { 00:08:10.677 "name": "pt2", 00:08:10.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.677 "is_configured": true, 00:08:10.677 "data_offset": 2048, 00:08:10.677 "data_size": 63488 00:08:10.677 } 00:08:10.677 ] 00:08:10.677 } 00:08:10.677 } 00:08:10.677 }' 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:10.677 pt2' 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:10.677 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.678 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.678 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.936 [2024-11-20 14:25:11.773566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5e4fd136-8367-4e06-b3a6-d1bacb9a1a83 '!=' 5e4fd136-8367-4e06-b3a6-d1bacb9a1a83 ']' 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 61221 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61221 ']' 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61221 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61221 00:08:10.936 killing process with pid 61221 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61221' 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61221 00:08:10.936 [2024-11-20 14:25:11.858498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.936 14:25:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61221 00:08:10.936 [2024-11-20 14:25:11.858619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.936 [2024-11-20 14:25:11.858704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.936 [2024-11-20 14:25:11.858730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:11.195 [2024-11-20 14:25:12.039781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.129 14:25:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:12.129 00:08:12.129 real 0m4.877s 00:08:12.129 user 0m7.144s 00:08:12.129 
sys 0m0.760s 00:08:12.129 14:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.129 ************************************ 00:08:12.129 END TEST raid_superblock_test 00:08:12.129 ************************************ 00:08:12.129 14:25:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.129 14:25:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:12.129 14:25:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.129 14:25:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.129 14:25:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.129 ************************************ 00:08:12.129 START TEST raid_read_error_test 00:08:12.129 ************************************ 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.129 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VJkVt81PAQ 00:08:12.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61434 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61434 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61434 ']' 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.130 14:25:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.388 [2024-11-20 14:25:13.288756] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:08:12.388 [2024-11-20 14:25:13.289114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61434 ] 00:08:12.647 [2024-11-20 14:25:13.477162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.647 [2024-11-20 14:25:13.635078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.906 [2024-11-20 14:25:13.884022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.906 [2024-11-20 14:25:13.884397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.473 BaseBdev1_malloc 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.473 true 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.473 [2024-11-20 14:25:14.342470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:13.473 [2024-11-20 14:25:14.342552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.473 [2024-11-20 14:25:14.342584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:13.473 [2024-11-20 14:25:14.342602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.473 [2024-11-20 14:25:14.345480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.473 [2024-11-20 14:25:14.345533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:13.473 BaseBdev1 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.473 BaseBdev2_malloc 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.473 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.473 true 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.474 [2024-11-20 14:25:14.403191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:13.474 [2024-11-20 14:25:14.403277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.474 [2024-11-20 14:25:14.403305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:13.474 [2024-11-20 14:25:14.403322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.474 [2024-11-20 14:25:14.406319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.474 [2024-11-20 14:25:14.406370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:13.474 BaseBdev2 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.474 [2024-11-20 14:25:14.411360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:13.474 [2024-11-20 14:25:14.413967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.474 [2024-11-20 14:25:14.414256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.474 [2024-11-20 14:25:14.414284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:13.474 [2024-11-20 14:25:14.414665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:13.474 [2024-11-20 14:25:14.414903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.474 [2024-11-20 14:25:14.414926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:13.474 [2024-11-20 14:25:14.415156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.474 "name": "raid_bdev1", 00:08:13.474 "uuid": "6272659a-c5ce-4bd1-898c-78cdc46e8f55", 00:08:13.474 "strip_size_kb": 64, 00:08:13.474 "state": "online", 00:08:13.474 "raid_level": "raid0", 00:08:13.474 "superblock": true, 00:08:13.474 "num_base_bdevs": 2, 00:08:13.474 "num_base_bdevs_discovered": 2, 00:08:13.474 "num_base_bdevs_operational": 2, 00:08:13.474 "base_bdevs_list": [ 00:08:13.474 { 00:08:13.474 "name": "BaseBdev1", 00:08:13.474 "uuid": "781e728e-25f1-5ac6-b397-82028764ec35", 00:08:13.474 "is_configured": true, 00:08:13.474 "data_offset": 2048, 00:08:13.474 "data_size": 63488 00:08:13.474 }, 00:08:13.474 { 00:08:13.474 "name": "BaseBdev2", 00:08:13.474 "uuid": "d3bfefe2-55d3-5692-b09a-814ea066014d", 00:08:13.474 "is_configured": true, 00:08:13.474 "data_offset": 2048, 00:08:13.474 "data_size": 63488 00:08:13.474 } 00:08:13.474 ] 00:08:13.474 }' 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.474 14:25:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.042 14:25:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:14.042 14:25:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:14.042 [2024-11-20 14:25:15.036980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.977 "name": "raid_bdev1", 00:08:14.977 "uuid": "6272659a-c5ce-4bd1-898c-78cdc46e8f55", 00:08:14.977 "strip_size_kb": 64, 00:08:14.977 "state": "online", 00:08:14.977 "raid_level": "raid0", 00:08:14.977 "superblock": true, 00:08:14.977 "num_base_bdevs": 2, 00:08:14.977 "num_base_bdevs_discovered": 2, 00:08:14.977 "num_base_bdevs_operational": 2, 00:08:14.977 "base_bdevs_list": [ 00:08:14.977 { 00:08:14.977 "name": "BaseBdev1", 00:08:14.977 "uuid": "781e728e-25f1-5ac6-b397-82028764ec35", 00:08:14.977 "is_configured": true, 00:08:14.977 "data_offset": 2048, 00:08:14.977 "data_size": 63488 00:08:14.977 }, 00:08:14.977 { 00:08:14.977 "name": "BaseBdev2", 00:08:14.977 "uuid": "d3bfefe2-55d3-5692-b09a-814ea066014d", 00:08:14.977 "is_configured": true, 00:08:14.977 "data_offset": 2048, 00:08:14.977 "data_size": 63488 00:08:14.977 } 00:08:14.977 ] 00:08:14.977 }' 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.977 14:25:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.601 14:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.601 14:25:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.601 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.601 [2024-11-20 14:25:16.441188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.601 [2024-11-20 14:25:16.441479] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.601 [2024-11-20 14:25:16.444956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.601 [2024-11-20 14:25:16.445017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.601 [2024-11-20 14:25:16.445064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.601 [2024-11-20 14:25:16.445083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:15.601 { 00:08:15.601 "results": [ 00:08:15.601 { 00:08:15.601 "job": "raid_bdev1", 00:08:15.601 "core_mask": "0x1", 00:08:15.601 "workload": "randrw", 00:08:15.601 "percentage": 50, 00:08:15.601 "status": "finished", 00:08:15.601 "queue_depth": 1, 00:08:15.601 "io_size": 131072, 00:08:15.601 "runtime": 1.40175, 00:08:15.601 "iops": 10398.430533261993, 00:08:15.602 "mibps": 1299.8038166577492, 00:08:15.602 "io_failed": 1, 00:08:15.602 "io_timeout": 0, 00:08:15.602 "avg_latency_us": 134.86278832781406, 00:08:15.602 "min_latency_us": 43.75272727272727, 00:08:15.602 "max_latency_us": 1921.3963636363637 00:08:15.602 } 00:08:15.602 ], 00:08:15.602 "core_count": 1 00:08:15.602 } 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61434 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61434 ']' 00:08:15.602 14:25:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61434 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61434 00:08:15.602 killing process with pid 61434 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61434' 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61434 00:08:15.602 14:25:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61434 00:08:15.602 [2024-11-20 14:25:16.480457] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.602 [2024-11-20 14:25:16.606427] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VJkVt81PAQ 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:16.984 ************************************ 00:08:16.984 END TEST raid_read_error_test 00:08:16.984 ************************************ 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
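The failure-rate check traced at `bdev_raid.sh@845` above pipes the bdevperf log through `grep -v Job | grep raid_bdev1 | awk '{print $6}'` to pull the sixth column of the `raid_bdev1` result row (here `fail_per_s=0.71`), then `bdev_raid.sh@849` asserts it differs from `0.00`. A minimal sketch of that extraction follows; the sample log lines are illustrative stand-ins, not actual bdevperf output, and only the pipeline itself mirrors the traced command:

```shell
# Stand-in for the bdevperf log: a "Job" header row plus a result row whose
# 6th whitespace-separated field plays the role of the failures-per-second value.
bdevperf_log=$(mktemp)
cat > "$bdevperf_log" <<'EOF'
Job: raid_bdev1 ended in about 1.40 seconds with error
       raid_bdev1 10398.43 1299.80 1 0 0.71
EOF
# Mirror bdev_raid.sh@845: drop the Job line, keep the raid_bdev1 row, print field 6.
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
# Mirror bdev_raid.sh@849: the error-injection test only passes if some I/O failed.
[ "$fail_per_s" != "0.00" ] && echo "fail_per_s=$fail_per_s"
rm -f "$bdevperf_log"
```

With the sample rows above this prints `fail_per_s=0.71`; in the real run the nonzero value confirms the injected `EE_BaseBdev1_malloc read failure` actually surfaced through the raid0 bdev.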
00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:16.984 00:08:16.984 real 0m4.593s 00:08:16.984 user 0m5.721s 00:08:16.984 sys 0m0.609s 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.984 14:25:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.984 14:25:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:16.984 14:25:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:16.984 14:25:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.984 14:25:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.984 ************************************ 00:08:16.984 START TEST raid_write_error_test 00:08:16.984 ************************************ 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.984 14:25:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ecIgEQMEoG 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61585 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61585 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61585 ']' 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.984 14:25:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.984 [2024-11-20 14:25:17.940495] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:16.984 [2024-11-20 14:25:17.940713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61585 ] 00:08:17.243 [2024-11-20 14:25:18.134139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.243 [2024-11-20 14:25:18.290023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.501 [2024-11-20 14:25:18.525350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.501 [2024-11-20 14:25:18.525754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.068 14:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.068 14:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.068 14:25:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.068 14:25:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.068 14:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.068 14:25:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.068 BaseBdev1_malloc 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.068 true 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.068 [2024-11-20 14:25:19.035234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.068 [2024-11-20 14:25:19.035308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.068 [2024-11-20 14:25:19.035346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.068 [2024-11-20 14:25:19.035364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.068 [2024-11-20 14:25:19.038276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.068 [2024-11-20 14:25:19.038330] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.068 BaseBdev1 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.068 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.068 BaseBdev2_malloc 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 true 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 [2024-11-20 14:25:19.091744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.069 [2024-11-20 14:25:19.091821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.069 [2024-11-20 14:25:19.091851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 
00:08:18.069 [2024-11-20 14:25:19.091869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.069 [2024-11-20 14:25:19.094822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.069 [2024-11-20 14:25:19.094874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.069 BaseBdev2 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.069 [2024-11-20 14:25:19.099874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.069 [2024-11-20 14:25:19.102459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.069 [2024-11-20 14:25:19.102746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.069 [2024-11-20 14:25:19.102775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.069 [2024-11-20 14:25:19.103117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:18.069 [2024-11-20 14:25:19.103351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.069 [2024-11-20 14:25:19.103379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:18.069 [2024-11-20 14:25:19.103597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
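`verify_raid_bdev_state` (invoked at `bdev_raid.sh@822` and traced in full for the read test earlier in this log) selects the `raid_bdev1` object out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and then asserts on its fields. A minimal sketch of those field checks against the JSON captured in this log, using only `grep`/`sed`/`awk` so it runs without an SPDK RPC socket; the JSON literal is abridged from the trace, and the helper name `get_field` is hypothetical, not part of `bdev_raid.sh`:

```shell
# raid_bdev_info as captured by bdev_raid.sh@113 in this run (abridged to scalars).
raid_bdev_info='{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2
}'
# Hypothetical helper: pull one scalar field value out of the JSON text.
# The trailing quote in the grep pattern keeps "num_base_bdevs" from also
# matching "num_base_bdevs_discovered".
get_field() {
  echo "$raid_bdev_info" | grep "\"$1\"" | sed 's/[",]//g' | awk '{print $2}'
}
# The same expectations verify_raid_bdev_state checks: online raid0, strip 64, 2 bdevs.
test "$(get_field state)" = "online"
test "$(get_field raid_level)" = "raid0"
test "$(get_field strip_size_kb)" = "64"
echo "raid_bdev1 verified: state=$(get_field state) level=$(get_field raid_level)"
```

The real helper additionally compares `num_base_bdevs_discovered` against `num_base_bdevs_operational`, which is why both counters appear in every JSON dump in this log.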
00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.069 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.327 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.327 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.327 "name": "raid_bdev1", 00:08:18.327 "uuid": "aa946740-9eec-4898-82ff-b2e661721cec", 00:08:18.327 "strip_size_kb": 64, 00:08:18.327 "state": "online", 00:08:18.327 "raid_level": "raid0", 00:08:18.327 "superblock": 
true, 00:08:18.327 "num_base_bdevs": 2, 00:08:18.327 "num_base_bdevs_discovered": 2, 00:08:18.327 "num_base_bdevs_operational": 2, 00:08:18.327 "base_bdevs_list": [ 00:08:18.327 { 00:08:18.327 "name": "BaseBdev1", 00:08:18.327 "uuid": "6a6fadc8-e341-549f-a0cf-98e49c47bc95", 00:08:18.327 "is_configured": true, 00:08:18.327 "data_offset": 2048, 00:08:18.327 "data_size": 63488 00:08:18.327 }, 00:08:18.327 { 00:08:18.327 "name": "BaseBdev2", 00:08:18.327 "uuid": "27fc7891-8492-5a8a-af5d-6e484c3740cb", 00:08:18.327 "is_configured": true, 00:08:18.327 "data_offset": 2048, 00:08:18.327 "data_size": 63488 00:08:18.327 } 00:08:18.327 ] 00:08:18.327 }' 00:08:18.327 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.327 14:25:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.585 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.585 14:25:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:18.843 [2024-11-20 14:25:19.741440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.777 "name": "raid_bdev1", 00:08:19.777 "uuid": "aa946740-9eec-4898-82ff-b2e661721cec", 00:08:19.777 "strip_size_kb": 64, 00:08:19.777 "state": "online", 00:08:19.777 "raid_level": "raid0", 
00:08:19.777 "superblock": true, 00:08:19.777 "num_base_bdevs": 2, 00:08:19.777 "num_base_bdevs_discovered": 2, 00:08:19.777 "num_base_bdevs_operational": 2, 00:08:19.777 "base_bdevs_list": [ 00:08:19.777 { 00:08:19.777 "name": "BaseBdev1", 00:08:19.777 "uuid": "6a6fadc8-e341-549f-a0cf-98e49c47bc95", 00:08:19.777 "is_configured": true, 00:08:19.777 "data_offset": 2048, 00:08:19.777 "data_size": 63488 00:08:19.777 }, 00:08:19.777 { 00:08:19.777 "name": "BaseBdev2", 00:08:19.777 "uuid": "27fc7891-8492-5a8a-af5d-6e484c3740cb", 00:08:19.777 "is_configured": true, 00:08:19.777 "data_offset": 2048, 00:08:19.777 "data_size": 63488 00:08:19.777 } 00:08:19.777 ] 00:08:19.777 }' 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.777 14:25:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.345 [2024-11-20 14:25:21.176513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.345 [2024-11-20 14:25:21.176874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.345 [2024-11-20 14:25:21.180381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.345 [2024-11-20 14:25:21.180509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.345 [2024-11-20 14:25:21.180569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.345 [2024-11-20 14:25:21.180590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.345 { 
00:08:20.345 "results": [ 00:08:20.345 { 00:08:20.345 "job": "raid_bdev1", 00:08:20.345 "core_mask": "0x1", 00:08:20.345 "workload": "randrw", 00:08:20.345 "percentage": 50, 00:08:20.345 "status": "finished", 00:08:20.345 "queue_depth": 1, 00:08:20.345 "io_size": 131072, 00:08:20.345 "runtime": 1.433029, 00:08:20.345 "iops": 10361.269730061289, 00:08:20.345 "mibps": 1295.1587162576611, 00:08:20.345 "io_failed": 1, 00:08:20.345 "io_timeout": 0, 00:08:20.345 "avg_latency_us": 134.9957592491689, 00:08:20.345 "min_latency_us": 43.52, 00:08:20.345 "max_latency_us": 1876.7127272727273 00:08:20.345 } 00:08:20.345 ], 00:08:20.345 "core_count": 1 00:08:20.345 } 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61585 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61585 ']' 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61585 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61585 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.345 killing process with pid 61585 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61585' 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61585 00:08:20.345 [2024-11-20 14:25:21.225894] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.345 14:25:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61585 00:08:20.345 [2024-11-20 14:25:21.351609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ecIgEQMEoG 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:08:21.720 00:08:21.720 real 0m4.649s 00:08:21.720 user 0m5.885s 00:08:21.720 sys 0m0.575s 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.720 14:25:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.720 ************************************ 00:08:21.720 END TEST raid_write_error_test 00:08:21.720 ************************************ 00:08:21.720 14:25:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:21.720 14:25:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:21.720 14:25:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.720 14:25:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.720 14:25:22 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.720 ************************************ 00:08:21.720 START TEST raid_state_function_test 00:08:21.720 ************************************ 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.720 14:25:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:21.720 Process raid pid: 61723 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61723 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61723' 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61723 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61723 ']' 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:21.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.720 14:25:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.720 [2024-11-20 14:25:22.614048] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:21.720 [2024-11-20 14:25:22.614447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.979 [2024-11-20 14:25:22.791492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.979 [2024-11-20 14:25:22.926368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.237 [2024-11-20 14:25:23.136900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.237 [2024-11-20 14:25:23.137186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.803 [2024-11-20 14:25:23.649043] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.803 [2024-11-20 14:25:23.649159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:08:22.803 [2024-11-20 14:25:23.649178] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.803 [2024-11-20 14:25:23.649196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.803 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.803 "name": "Existed_Raid", 00:08:22.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.804 "strip_size_kb": 64, 00:08:22.804 "state": "configuring", 00:08:22.804 "raid_level": "concat", 00:08:22.804 "superblock": false, 00:08:22.804 "num_base_bdevs": 2, 00:08:22.804 "num_base_bdevs_discovered": 0, 00:08:22.804 "num_base_bdevs_operational": 2, 00:08:22.804 "base_bdevs_list": [ 00:08:22.804 { 00:08:22.804 "name": "BaseBdev1", 00:08:22.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.804 "is_configured": false, 00:08:22.804 "data_offset": 0, 00:08:22.804 "data_size": 0 00:08:22.804 }, 00:08:22.804 { 00:08:22.804 "name": "BaseBdev2", 00:08:22.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.804 "is_configured": false, 00:08:22.804 "data_offset": 0, 00:08:22.804 "data_size": 0 00:08:22.804 } 00:08:22.804 ] 00:08:22.804 }' 00:08:22.804 14:25:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.804 14:25:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.372 [2024-11-20 14:25:24.189120] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.372 [2024-11-20 14:25:24.189197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.372 [2024-11-20 14:25:24.197104] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.372 [2024-11-20 14:25:24.197185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.372 [2024-11-20 14:25:24.197213] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.372 [2024-11-20 14:25:24.197233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.372 [2024-11-20 14:25:24.242479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.372 BaseBdev1 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.372 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.372 [ 00:08:23.372 { 00:08:23.372 "name": "BaseBdev1", 00:08:23.372 "aliases": [ 00:08:23.372 "db7b16ff-f5b0-4ff1-a757-5cac24a7a7b2" 00:08:23.372 ], 00:08:23.372 "product_name": "Malloc disk", 00:08:23.372 "block_size": 512, 00:08:23.372 "num_blocks": 65536, 00:08:23.372 "uuid": "db7b16ff-f5b0-4ff1-a757-5cac24a7a7b2", 00:08:23.372 "assigned_rate_limits": { 00:08:23.372 "rw_ios_per_sec": 0, 00:08:23.373 "rw_mbytes_per_sec": 0, 00:08:23.373 "r_mbytes_per_sec": 0, 00:08:23.373 "w_mbytes_per_sec": 0 00:08:23.373 }, 00:08:23.373 "claimed": true, 00:08:23.373 "claim_type": "exclusive_write", 00:08:23.373 "zoned": false, 00:08:23.373 "supported_io_types": { 00:08:23.373 "read": true, 00:08:23.373 "write": true, 00:08:23.373 "unmap": true, 00:08:23.373 "flush": true, 00:08:23.373 "reset": true, 00:08:23.373 "nvme_admin": false, 00:08:23.373 
"nvme_io": false, 00:08:23.373 "nvme_io_md": false, 00:08:23.373 "write_zeroes": true, 00:08:23.373 "zcopy": true, 00:08:23.373 "get_zone_info": false, 00:08:23.373 "zone_management": false, 00:08:23.373 "zone_append": false, 00:08:23.373 "compare": false, 00:08:23.373 "compare_and_write": false, 00:08:23.373 "abort": true, 00:08:23.373 "seek_hole": false, 00:08:23.373 "seek_data": false, 00:08:23.373 "copy": true, 00:08:23.373 "nvme_iov_md": false 00:08:23.373 }, 00:08:23.373 "memory_domains": [ 00:08:23.373 { 00:08:23.373 "dma_device_id": "system", 00:08:23.373 "dma_device_type": 1 00:08:23.373 }, 00:08:23.373 { 00:08:23.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.373 "dma_device_type": 2 00:08:23.373 } 00:08:23.373 ], 00:08:23.373 "driver_specific": {} 00:08:23.373 } 00:08:23.373 ] 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.373 14:25:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.373 "name": "Existed_Raid", 00:08:23.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.373 "strip_size_kb": 64, 00:08:23.373 "state": "configuring", 00:08:23.373 "raid_level": "concat", 00:08:23.373 "superblock": false, 00:08:23.373 "num_base_bdevs": 2, 00:08:23.373 "num_base_bdevs_discovered": 1, 00:08:23.373 "num_base_bdevs_operational": 2, 00:08:23.373 "base_bdevs_list": [ 00:08:23.373 { 00:08:23.373 "name": "BaseBdev1", 00:08:23.373 "uuid": "db7b16ff-f5b0-4ff1-a757-5cac24a7a7b2", 00:08:23.373 "is_configured": true, 00:08:23.373 "data_offset": 0, 00:08:23.373 "data_size": 65536 00:08:23.373 }, 00:08:23.373 { 00:08:23.373 "name": "BaseBdev2", 00:08:23.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.373 "is_configured": false, 00:08:23.373 "data_offset": 0, 00:08:23.373 "data_size": 0 00:08:23.373 } 00:08:23.373 ] 00:08:23.373 }' 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.373 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.941 14:25:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.941 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.941 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.941 [2024-11-20 14:25:24.790779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.942 [2024-11-20 14:25:24.790874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.942 [2024-11-20 14:25:24.798859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.942 [2024-11-20 14:25:24.801474] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.942 [2024-11-20 14:25:24.801555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.942 "name": "Existed_Raid", 00:08:23.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.942 "strip_size_kb": 64, 00:08:23.942 "state": "configuring", 00:08:23.942 "raid_level": "concat", 00:08:23.942 "superblock": false, 00:08:23.942 "num_base_bdevs": 2, 00:08:23.942 "num_base_bdevs_discovered": 1, 00:08:23.942 "num_base_bdevs_operational": 2, 
00:08:23.942 "base_bdevs_list": [ 00:08:23.942 { 00:08:23.942 "name": "BaseBdev1", 00:08:23.942 "uuid": "db7b16ff-f5b0-4ff1-a757-5cac24a7a7b2", 00:08:23.942 "is_configured": true, 00:08:23.942 "data_offset": 0, 00:08:23.942 "data_size": 65536 00:08:23.942 }, 00:08:23.942 { 00:08:23.942 "name": "BaseBdev2", 00:08:23.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.942 "is_configured": false, 00:08:23.942 "data_offset": 0, 00:08:23.942 "data_size": 0 00:08:23.942 } 00:08:23.942 ] 00:08:23.942 }' 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.942 14:25:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.510 [2024-11-20 14:25:25.350005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.510 [2024-11-20 14:25:25.350357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:24.510 [2024-11-20 14:25:25.350413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:24.510 [2024-11-20 14:25:25.350915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:24.510 [2024-11-20 14:25:25.351178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:24.510 [2024-11-20 14:25:25.351201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:24.510 [2024-11-20 14:25:25.351554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.510 BaseBdev2 00:08:24.510 
14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.510 [ 00:08:24.510 { 00:08:24.510 "name": "BaseBdev2", 00:08:24.510 "aliases": [ 00:08:24.510 "feef39e3-a70c-492b-a6e8-6d582e481fb7" 00:08:24.510 ], 00:08:24.510 "product_name": "Malloc disk", 00:08:24.510 "block_size": 512, 00:08:24.510 "num_blocks": 65536, 00:08:24.510 "uuid": "feef39e3-a70c-492b-a6e8-6d582e481fb7", 00:08:24.510 "assigned_rate_limits": { 00:08:24.510 "rw_ios_per_sec": 0, 00:08:24.510 "rw_mbytes_per_sec": 0, 
00:08:24.510 "r_mbytes_per_sec": 0, 00:08:24.510 "w_mbytes_per_sec": 0 00:08:24.510 }, 00:08:24.510 "claimed": true, 00:08:24.510 "claim_type": "exclusive_write", 00:08:24.510 "zoned": false, 00:08:24.510 "supported_io_types": { 00:08:24.510 "read": true, 00:08:24.510 "write": true, 00:08:24.510 "unmap": true, 00:08:24.510 "flush": true, 00:08:24.510 "reset": true, 00:08:24.510 "nvme_admin": false, 00:08:24.510 "nvme_io": false, 00:08:24.510 "nvme_io_md": false, 00:08:24.510 "write_zeroes": true, 00:08:24.510 "zcopy": true, 00:08:24.510 "get_zone_info": false, 00:08:24.510 "zone_management": false, 00:08:24.510 "zone_append": false, 00:08:24.510 "compare": false, 00:08:24.510 "compare_and_write": false, 00:08:24.510 "abort": true, 00:08:24.510 "seek_hole": false, 00:08:24.510 "seek_data": false, 00:08:24.510 "copy": true, 00:08:24.510 "nvme_iov_md": false 00:08:24.510 }, 00:08:24.510 "memory_domains": [ 00:08:24.510 { 00:08:24.510 "dma_device_id": "system", 00:08:24.510 "dma_device_type": 1 00:08:24.510 }, 00:08:24.510 { 00:08:24.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.510 "dma_device_type": 2 00:08:24.510 } 00:08:24.510 ], 00:08:24.510 "driver_specific": {} 00:08:24.510 } 00:08:24.510 ] 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.510 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.511 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.511 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.511 "name": "Existed_Raid", 00:08:24.511 "uuid": "14d37508-4c07-49b1-9f08-ab843ab22f24", 00:08:24.511 "strip_size_kb": 64, 00:08:24.511 "state": "online", 00:08:24.511 "raid_level": "concat", 00:08:24.511 "superblock": false, 00:08:24.511 "num_base_bdevs": 2, 00:08:24.511 "num_base_bdevs_discovered": 2, 00:08:24.511 "num_base_bdevs_operational": 2, 00:08:24.511 "base_bdevs_list": [ 00:08:24.511 { 00:08:24.511 "name": "BaseBdev1", 00:08:24.511 "uuid": "db7b16ff-f5b0-4ff1-a757-5cac24a7a7b2", 00:08:24.511 
"is_configured": true, 00:08:24.511 "data_offset": 0, 00:08:24.511 "data_size": 65536 00:08:24.511 }, 00:08:24.511 { 00:08:24.511 "name": "BaseBdev2", 00:08:24.511 "uuid": "feef39e3-a70c-492b-a6e8-6d582e481fb7", 00:08:24.511 "is_configured": true, 00:08:24.511 "data_offset": 0, 00:08:24.511 "data_size": 65536 00:08:24.511 } 00:08:24.511 ] 00:08:24.511 }' 00:08:24.511 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.511 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.078 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.079 [2024-11-20 14:25:25.914577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:25.079 "name": "Existed_Raid", 00:08:25.079 "aliases": [ 00:08:25.079 "14d37508-4c07-49b1-9f08-ab843ab22f24" 00:08:25.079 ], 00:08:25.079 "product_name": "Raid Volume", 00:08:25.079 "block_size": 512, 00:08:25.079 "num_blocks": 131072, 00:08:25.079 "uuid": "14d37508-4c07-49b1-9f08-ab843ab22f24", 00:08:25.079 "assigned_rate_limits": { 00:08:25.079 "rw_ios_per_sec": 0, 00:08:25.079 "rw_mbytes_per_sec": 0, 00:08:25.079 "r_mbytes_per_sec": 0, 00:08:25.079 "w_mbytes_per_sec": 0 00:08:25.079 }, 00:08:25.079 "claimed": false, 00:08:25.079 "zoned": false, 00:08:25.079 "supported_io_types": { 00:08:25.079 "read": true, 00:08:25.079 "write": true, 00:08:25.079 "unmap": true, 00:08:25.079 "flush": true, 00:08:25.079 "reset": true, 00:08:25.079 "nvme_admin": false, 00:08:25.079 "nvme_io": false, 00:08:25.079 "nvme_io_md": false, 00:08:25.079 "write_zeroes": true, 00:08:25.079 "zcopy": false, 00:08:25.079 "get_zone_info": false, 00:08:25.079 "zone_management": false, 00:08:25.079 "zone_append": false, 00:08:25.079 "compare": false, 00:08:25.079 "compare_and_write": false, 00:08:25.079 "abort": false, 00:08:25.079 "seek_hole": false, 00:08:25.079 "seek_data": false, 00:08:25.079 "copy": false, 00:08:25.079 "nvme_iov_md": false 00:08:25.079 }, 00:08:25.079 "memory_domains": [ 00:08:25.079 { 00:08:25.079 "dma_device_id": "system", 00:08:25.079 "dma_device_type": 1 00:08:25.079 }, 00:08:25.079 { 00:08:25.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.079 "dma_device_type": 2 00:08:25.079 }, 00:08:25.079 { 00:08:25.079 "dma_device_id": "system", 00:08:25.079 "dma_device_type": 1 00:08:25.079 }, 00:08:25.079 { 00:08:25.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.079 "dma_device_type": 2 00:08:25.079 } 00:08:25.079 ], 00:08:25.079 "driver_specific": { 00:08:25.079 "raid": { 00:08:25.079 "uuid": "14d37508-4c07-49b1-9f08-ab843ab22f24", 00:08:25.079 "strip_size_kb": 64, 00:08:25.079 "state": "online", 00:08:25.079 "raid_level": "concat", 
00:08:25.079 "superblock": false, 00:08:25.079 "num_base_bdevs": 2, 00:08:25.079 "num_base_bdevs_discovered": 2, 00:08:25.079 "num_base_bdevs_operational": 2, 00:08:25.079 "base_bdevs_list": [ 00:08:25.079 { 00:08:25.079 "name": "BaseBdev1", 00:08:25.079 "uuid": "db7b16ff-f5b0-4ff1-a757-5cac24a7a7b2", 00:08:25.079 "is_configured": true, 00:08:25.079 "data_offset": 0, 00:08:25.079 "data_size": 65536 00:08:25.079 }, 00:08:25.079 { 00:08:25.079 "name": "BaseBdev2", 00:08:25.079 "uuid": "feef39e3-a70c-492b-a6e8-6d582e481fb7", 00:08:25.079 "is_configured": true, 00:08:25.079 "data_offset": 0, 00:08:25.079 "data_size": 65536 00:08:25.079 } 00:08:25.079 ] 00:08:25.079 } 00:08:25.079 } 00:08:25.079 }' 00:08:25.079 14:25:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:25.079 BaseBdev2' 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.079 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.338 [2024-11-20 14:25:26.214387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.338 [2024-11-20 14:25:26.214462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.338 [2024-11-20 14:25:26.214536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.338 14:25:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.338 "name": "Existed_Raid", 00:08:25.338 "uuid": "14d37508-4c07-49b1-9f08-ab843ab22f24", 00:08:25.338 "strip_size_kb": 64, 00:08:25.338 "state": "offline", 00:08:25.338 "raid_level": "concat", 00:08:25.338 "superblock": false, 00:08:25.338 "num_base_bdevs": 2, 00:08:25.338 "num_base_bdevs_discovered": 1, 00:08:25.338 "num_base_bdevs_operational": 1, 00:08:25.338 "base_bdevs_list": [ 00:08:25.338 { 00:08:25.338 "name": null, 00:08:25.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.338 "is_configured": false, 00:08:25.338 "data_offset": 0, 00:08:25.338 "data_size": 65536 00:08:25.338 }, 00:08:25.338 { 00:08:25.338 "name": "BaseBdev2", 00:08:25.338 "uuid": "feef39e3-a70c-492b-a6e8-6d582e481fb7", 00:08:25.338 "is_configured": true, 00:08:25.338 "data_offset": 0, 00:08:25.338 "data_size": 65536 00:08:25.338 } 00:08:25.338 ] 00:08:25.338 }' 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.338 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.906 14:25:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.906 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.906 [2024-11-20 14:25:26.908161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.906 [2024-11-20 14:25:26.908267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:26.165 14:25:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.165 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.165 14:25:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61723 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61723 ']' 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61723 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61723 00:08:26.165 killing process with pid 61723 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61723' 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61723 00:08:26.165 [2024-11-20 14:25:27.077058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:26.165 14:25:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61723 00:08:26.165 [2024-11-20 14:25:27.092209] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:27.542 ************************************ 00:08:27.542 END TEST raid_state_function_test 00:08:27.542 ************************************ 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:27.542 00:08:27.542 real 0m5.658s 
00:08:27.542 user 0m8.535s 00:08:27.542 sys 0m0.806s 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.542 14:25:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:27.542 14:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:27.542 14:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.542 14:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:27.542 ************************************ 00:08:27.542 START TEST raid_state_function_test_sb 00:08:27.542 ************************************ 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:27.542 Process raid pid: 61987 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61987 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61987' 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61987 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61987 ']' 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.542 14:25:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.542 [2024-11-20 14:25:28.350594] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:08:27.542 [2024-11-20 14:25:28.351128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.542 [2024-11-20 14:25:28.539146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.801 [2024-11-20 14:25:28.680036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.060 [2024-11-20 14:25:28.895745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.060 [2024-11-20 14:25:28.896030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.630 [2024-11-20 14:25:29.432993] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:28.630 [2024-11-20 14:25:29.433345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:28.630 [2024-11-20 14:25:29.433375] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.630 [2024-11-20 14:25:29.433394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.630 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.631 "name": "Existed_Raid", 00:08:28.631 "uuid": "8b96490e-b1fa-42d1-99dc-b4fa388b0e3a", 00:08:28.631 
"strip_size_kb": 64, 00:08:28.631 "state": "configuring", 00:08:28.631 "raid_level": "concat", 00:08:28.631 "superblock": true, 00:08:28.631 "num_base_bdevs": 2, 00:08:28.631 "num_base_bdevs_discovered": 0, 00:08:28.631 "num_base_bdevs_operational": 2, 00:08:28.631 "base_bdevs_list": [ 00:08:28.631 { 00:08:28.631 "name": "BaseBdev1", 00:08:28.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.631 "is_configured": false, 00:08:28.631 "data_offset": 0, 00:08:28.631 "data_size": 0 00:08:28.631 }, 00:08:28.631 { 00:08:28.631 "name": "BaseBdev2", 00:08:28.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.631 "is_configured": false, 00:08:28.631 "data_offset": 0, 00:08:28.631 "data_size": 0 00:08:28.631 } 00:08:28.631 ] 00:08:28.631 }' 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.631 14:25:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.202 [2024-11-20 14:25:30.025049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.202 [2024-11-20 14:25:30.025096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.202 [2024-11-20 14:25:30.033041] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.202 [2024-11-20 14:25:30.033243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.202 [2024-11-20 14:25:30.033374] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.202 [2024-11-20 14:25:30.033440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.202 BaseBdev1 00:08:29.202 [2024-11-20 14:25:30.080692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.202 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.202 [ 00:08:29.202 { 00:08:29.202 "name": "BaseBdev1", 00:08:29.202 "aliases": [ 00:08:29.202 "dfb4307b-2fa7-4167-bbc8-6c3e0fbbb49c" 00:08:29.202 ], 00:08:29.202 "product_name": "Malloc disk", 00:08:29.202 "block_size": 512, 00:08:29.202 "num_blocks": 65536, 00:08:29.203 "uuid": "dfb4307b-2fa7-4167-bbc8-6c3e0fbbb49c", 00:08:29.203 "assigned_rate_limits": { 00:08:29.203 "rw_ios_per_sec": 0, 00:08:29.203 "rw_mbytes_per_sec": 0, 00:08:29.203 "r_mbytes_per_sec": 0, 00:08:29.203 "w_mbytes_per_sec": 0 00:08:29.203 }, 00:08:29.203 "claimed": true, 00:08:29.203 "claim_type": "exclusive_write", 00:08:29.203 "zoned": false, 00:08:29.203 "supported_io_types": { 00:08:29.203 "read": true, 00:08:29.203 "write": true, 00:08:29.203 "unmap": true, 00:08:29.203 "flush": true, 00:08:29.203 "reset": true, 00:08:29.203 "nvme_admin": false, 00:08:29.203 "nvme_io": false, 00:08:29.203 "nvme_io_md": false, 00:08:29.203 "write_zeroes": true, 00:08:29.203 "zcopy": true, 00:08:29.203 "get_zone_info": false, 00:08:29.203 "zone_management": false, 00:08:29.203 "zone_append": false, 00:08:29.203 "compare": false, 00:08:29.203 
"compare_and_write": false, 00:08:29.203 "abort": true, 00:08:29.203 "seek_hole": false, 00:08:29.203 "seek_data": false, 00:08:29.203 "copy": true, 00:08:29.203 "nvme_iov_md": false 00:08:29.203 }, 00:08:29.203 "memory_domains": [ 00:08:29.203 { 00:08:29.203 "dma_device_id": "system", 00:08:29.203 "dma_device_type": 1 00:08:29.203 }, 00:08:29.203 { 00:08:29.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.203 "dma_device_type": 2 00:08:29.203 } 00:08:29.203 ], 00:08:29.203 "driver_specific": {} 00:08:29.203 } 00:08:29.203 ] 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.203 14:25:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.203 "name": "Existed_Raid", 00:08:29.203 "uuid": "dd2b0353-529e-49ee-997b-8cabe4e833c2", 00:08:29.203 "strip_size_kb": 64, 00:08:29.203 "state": "configuring", 00:08:29.203 "raid_level": "concat", 00:08:29.203 "superblock": true, 00:08:29.203 "num_base_bdevs": 2, 00:08:29.203 "num_base_bdevs_discovered": 1, 00:08:29.203 "num_base_bdevs_operational": 2, 00:08:29.203 "base_bdevs_list": [ 00:08:29.203 { 00:08:29.203 "name": "BaseBdev1", 00:08:29.203 "uuid": "dfb4307b-2fa7-4167-bbc8-6c3e0fbbb49c", 00:08:29.203 "is_configured": true, 00:08:29.203 "data_offset": 2048, 00:08:29.203 "data_size": 63488 00:08:29.203 }, 00:08:29.203 { 00:08:29.203 "name": "BaseBdev2", 00:08:29.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.203 "is_configured": false, 00:08:29.203 "data_offset": 0, 00:08:29.203 "data_size": 0 00:08:29.203 } 00:08:29.203 ] 00:08:29.203 }' 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.203 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.771 [2024-11-20 14:25:30.621022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:29.771 [2024-11-20 14:25:30.621115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.771 [2024-11-20 14:25:30.629020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:29.771 [2024-11-20 14:25:30.631659] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.771 [2024-11-20 14:25:30.631863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.771 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.772 "name": "Existed_Raid", 00:08:29.772 "uuid": "1a71de18-fa36-4e38-8a80-072a33f373a1", 00:08:29.772 "strip_size_kb": 64, 00:08:29.772 "state": "configuring", 00:08:29.772 "raid_level": "concat", 00:08:29.772 "superblock": true, 00:08:29.772 "num_base_bdevs": 2, 00:08:29.772 "num_base_bdevs_discovered": 1, 00:08:29.772 "num_base_bdevs_operational": 2, 00:08:29.772 "base_bdevs_list": [ 00:08:29.772 { 00:08:29.772 "name": "BaseBdev1", 00:08:29.772 "uuid": 
"dfb4307b-2fa7-4167-bbc8-6c3e0fbbb49c", 00:08:29.772 "is_configured": true, 00:08:29.772 "data_offset": 2048, 00:08:29.772 "data_size": 63488 00:08:29.772 }, 00:08:29.772 { 00:08:29.772 "name": "BaseBdev2", 00:08:29.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.772 "is_configured": false, 00:08:29.772 "data_offset": 0, 00:08:29.772 "data_size": 0 00:08:29.772 } 00:08:29.772 ] 00:08:29.772 }' 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.772 14:25:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.340 [2024-11-20 14:25:31.155953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.340 [2024-11-20 14:25:31.156306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:30.340 [2024-11-20 14:25:31.156327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:30.340 [2024-11-20 14:25:31.156692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:30.340 BaseBdev2 00:08:30.340 [2024-11-20 14:25:31.156898] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:30.340 [2024-11-20 14:25:31.156922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:30.340 [2024-11-20 14:25:31.157093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.340 [ 00:08:30.340 { 00:08:30.340 "name": "BaseBdev2", 00:08:30.340 "aliases": [ 00:08:30.340 "a6012413-bedd-46df-99d5-67355be1cdc6" 00:08:30.340 ], 00:08:30.340 "product_name": "Malloc disk", 00:08:30.340 "block_size": 512, 00:08:30.340 "num_blocks": 65536, 00:08:30.340 "uuid": "a6012413-bedd-46df-99d5-67355be1cdc6", 00:08:30.340 "assigned_rate_limits": { 00:08:30.340 "rw_ios_per_sec": 0, 00:08:30.340 "rw_mbytes_per_sec": 0, 00:08:30.340 "r_mbytes_per_sec": 0, 
00:08:30.340 "w_mbytes_per_sec": 0 00:08:30.340 }, 00:08:30.340 "claimed": true, 00:08:30.340 "claim_type": "exclusive_write", 00:08:30.340 "zoned": false, 00:08:30.340 "supported_io_types": { 00:08:30.340 "read": true, 00:08:30.340 "write": true, 00:08:30.340 "unmap": true, 00:08:30.340 "flush": true, 00:08:30.340 "reset": true, 00:08:30.340 "nvme_admin": false, 00:08:30.340 "nvme_io": false, 00:08:30.340 "nvme_io_md": false, 00:08:30.340 "write_zeroes": true, 00:08:30.340 "zcopy": true, 00:08:30.340 "get_zone_info": false, 00:08:30.340 "zone_management": false, 00:08:30.340 "zone_append": false, 00:08:30.340 "compare": false, 00:08:30.340 "compare_and_write": false, 00:08:30.340 "abort": true, 00:08:30.340 "seek_hole": false, 00:08:30.340 "seek_data": false, 00:08:30.340 "copy": true, 00:08:30.340 "nvme_iov_md": false 00:08:30.340 }, 00:08:30.340 "memory_domains": [ 00:08:30.340 { 00:08:30.340 "dma_device_id": "system", 00:08:30.340 "dma_device_type": 1 00:08:30.340 }, 00:08:30.340 { 00:08:30.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.340 "dma_device_type": 2 00:08:30.340 } 00:08:30.340 ], 00:08:30.340 "driver_specific": {} 00:08:30.340 } 00:08:30.340 ] 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.340 "name": "Existed_Raid", 00:08:30.340 "uuid": "1a71de18-fa36-4e38-8a80-072a33f373a1", 00:08:30.340 "strip_size_kb": 64, 00:08:30.340 "state": "online", 00:08:30.340 "raid_level": "concat", 00:08:30.340 "superblock": true, 00:08:30.340 "num_base_bdevs": 2, 00:08:30.340 "num_base_bdevs_discovered": 2, 00:08:30.340 "num_base_bdevs_operational": 2, 00:08:30.340 "base_bdevs_list": [ 00:08:30.340 { 00:08:30.340 "name": "BaseBdev1", 00:08:30.340 "uuid": 
"dfb4307b-2fa7-4167-bbc8-6c3e0fbbb49c", 00:08:30.340 "is_configured": true, 00:08:30.340 "data_offset": 2048, 00:08:30.340 "data_size": 63488 00:08:30.340 }, 00:08:30.340 { 00:08:30.340 "name": "BaseBdev2", 00:08:30.340 "uuid": "a6012413-bedd-46df-99d5-67355be1cdc6", 00:08:30.340 "is_configured": true, 00:08:30.340 "data_offset": 2048, 00:08:30.340 "data_size": 63488 00:08:30.340 } 00:08:30.340 ] 00:08:30.340 }' 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.340 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.908 [2024-11-20 14:25:31.696565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.908 "name": "Existed_Raid", 00:08:30.908 "aliases": [ 00:08:30.908 "1a71de18-fa36-4e38-8a80-072a33f373a1" 00:08:30.908 ], 00:08:30.908 "product_name": "Raid Volume", 00:08:30.908 "block_size": 512, 00:08:30.908 "num_blocks": 126976, 00:08:30.908 "uuid": "1a71de18-fa36-4e38-8a80-072a33f373a1", 00:08:30.908 "assigned_rate_limits": { 00:08:30.908 "rw_ios_per_sec": 0, 00:08:30.908 "rw_mbytes_per_sec": 0, 00:08:30.908 "r_mbytes_per_sec": 0, 00:08:30.908 "w_mbytes_per_sec": 0 00:08:30.908 }, 00:08:30.908 "claimed": false, 00:08:30.908 "zoned": false, 00:08:30.908 "supported_io_types": { 00:08:30.908 "read": true, 00:08:30.908 "write": true, 00:08:30.908 "unmap": true, 00:08:30.908 "flush": true, 00:08:30.908 "reset": true, 00:08:30.908 "nvme_admin": false, 00:08:30.908 "nvme_io": false, 00:08:30.908 "nvme_io_md": false, 00:08:30.908 "write_zeroes": true, 00:08:30.908 "zcopy": false, 00:08:30.908 "get_zone_info": false, 00:08:30.908 "zone_management": false, 00:08:30.908 "zone_append": false, 00:08:30.908 "compare": false, 00:08:30.908 "compare_and_write": false, 00:08:30.908 "abort": false, 00:08:30.908 "seek_hole": false, 00:08:30.908 "seek_data": false, 00:08:30.908 "copy": false, 00:08:30.908 "nvme_iov_md": false 00:08:30.908 }, 00:08:30.908 "memory_domains": [ 00:08:30.908 { 00:08:30.908 "dma_device_id": "system", 00:08:30.908 "dma_device_type": 1 00:08:30.908 }, 00:08:30.908 { 00:08:30.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.908 "dma_device_type": 2 00:08:30.908 }, 00:08:30.908 { 00:08:30.908 "dma_device_id": "system", 00:08:30.908 "dma_device_type": 1 00:08:30.908 }, 00:08:30.908 { 00:08:30.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.908 "dma_device_type": 2 00:08:30.908 } 00:08:30.908 ], 00:08:30.908 "driver_specific": { 00:08:30.908 "raid": { 00:08:30.908 "uuid": "1a71de18-fa36-4e38-8a80-072a33f373a1", 00:08:30.908 
"strip_size_kb": 64, 00:08:30.908 "state": "online", 00:08:30.908 "raid_level": "concat", 00:08:30.908 "superblock": true, 00:08:30.908 "num_base_bdevs": 2, 00:08:30.908 "num_base_bdevs_discovered": 2, 00:08:30.908 "num_base_bdevs_operational": 2, 00:08:30.908 "base_bdevs_list": [ 00:08:30.908 { 00:08:30.908 "name": "BaseBdev1", 00:08:30.908 "uuid": "dfb4307b-2fa7-4167-bbc8-6c3e0fbbb49c", 00:08:30.908 "is_configured": true, 00:08:30.908 "data_offset": 2048, 00:08:30.908 "data_size": 63488 00:08:30.908 }, 00:08:30.908 { 00:08:30.908 "name": "BaseBdev2", 00:08:30.908 "uuid": "a6012413-bedd-46df-99d5-67355be1cdc6", 00:08:30.908 "is_configured": true, 00:08:30.908 "data_offset": 2048, 00:08:30.908 "data_size": 63488 00:08:30.908 } 00:08:30.908 ] 00:08:30.908 } 00:08:30.908 } 00:08:30.908 }' 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:30.908 BaseBdev2' 00:08:30.908 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.909 14:25:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.909 [2024-11-20 14:25:31.956453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:30.909 [2024-11-20 14:25:31.956824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.909 [2024-11-20 14:25:31.956922] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.167 
14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.167 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.167 "name": "Existed_Raid", 00:08:31.167 "uuid": "1a71de18-fa36-4e38-8a80-072a33f373a1", 00:08:31.167 "strip_size_kb": 64, 00:08:31.168 "state": "offline", 00:08:31.168 "raid_level": "concat", 00:08:31.168 "superblock": true, 00:08:31.168 "num_base_bdevs": 2, 00:08:31.168 "num_base_bdevs_discovered": 1, 00:08:31.168 "num_base_bdevs_operational": 1, 00:08:31.168 "base_bdevs_list": [ 00:08:31.168 { 00:08:31.168 "name": null, 00:08:31.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.168 "is_configured": false, 00:08:31.168 "data_offset": 0, 00:08:31.168 "data_size": 63488 00:08:31.168 }, 00:08:31.168 { 00:08:31.168 "name": "BaseBdev2", 00:08:31.168 "uuid": "a6012413-bedd-46df-99d5-67355be1cdc6", 00:08:31.168 "is_configured": true, 00:08:31.168 "data_offset": 2048, 00:08:31.168 "data_size": 63488 00:08:31.168 } 00:08:31.168 ] 00:08:31.168 }' 00:08:31.168 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.168 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.734 14:25:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.734 [2024-11-20 14:25:32.652504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:31.734 [2024-11-20 14:25:32.652884] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.734 14:25:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.734 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61987 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61987 ']' 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61987 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61987 00:08:31.993 killing process with pid 61987 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61987' 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61987 00:08:31.993 [2024-11-20 14:25:32.832009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:31.993 14:25:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61987 00:08:31.993 [2024-11-20 14:25:32.846986] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.929 ************************************ 00:08:32.929 END TEST raid_state_function_test_sb 00:08:32.929 ************************************ 00:08:32.929 14:25:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:32.929 00:08:32.929 real 0m5.697s 00:08:32.929 user 0m8.566s 00:08:32.929 sys 0m0.864s 00:08:32.929 14:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.929 14:25:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.929 14:25:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:32.929 14:25:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:32.929 14:25:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.929 14:25:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.929 ************************************ 00:08:32.929 START TEST raid_superblock_test 00:08:32.929 ************************************ 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:32.929 
14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:32.929 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62245 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62245 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62245 ']' 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.188 14:25:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.188 [2024-11-20 14:25:34.098021] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:33.188 [2024-11-20 14:25:34.098295] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62245 ] 00:08:33.446 [2024-11-20 14:25:34.279690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.446 [2024-11-20 14:25:34.415130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.704 [2024-11-20 14:25:34.621732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.704 [2024-11-20 14:25:34.621796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.271 14:25:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.271 malloc1 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.271 [2024-11-20 14:25:35.118710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:34.271 [2024-11-20 14:25:35.119013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.271 [2024-11-20 14:25:35.119055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:34.271 [2024-11-20 14:25:35.119073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.271 [2024-11-20 14:25:35.121961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.271 [2024-11-20 14:25:35.122008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:34.271 pt1 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:34.271 14:25:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.271 malloc2 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.271 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.271 [2024-11-20 14:25:35.167487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.271 [2024-11-20 14:25:35.167746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.271 [2024-11-20 14:25:35.167897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:34.271 
[2024-11-20 14:25:35.168028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.271 [2024-11-20 14:25:35.170972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.271 pt2 00:08:34.272 [2024-11-20 14:25:35.171153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.272 [2024-11-20 14:25:35.175569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:34.272 [2024-11-20 14:25:35.178113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.272 [2024-11-20 14:25:35.178331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:34.272 [2024-11-20 14:25:35.178351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:34.272 [2024-11-20 14:25:35.178684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:34.272 [2024-11-20 14:25:35.178881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:34.272 [2024-11-20 14:25:35.178901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:34.272 [2024-11-20 14:25:35.179097] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.272 "name": "raid_bdev1", 00:08:34.272 "uuid": 
"2c54f729-98cf-4799-8f1f-7bfdae0237d4", 00:08:34.272 "strip_size_kb": 64, 00:08:34.272 "state": "online", 00:08:34.272 "raid_level": "concat", 00:08:34.272 "superblock": true, 00:08:34.272 "num_base_bdevs": 2, 00:08:34.272 "num_base_bdevs_discovered": 2, 00:08:34.272 "num_base_bdevs_operational": 2, 00:08:34.272 "base_bdevs_list": [ 00:08:34.272 { 00:08:34.272 "name": "pt1", 00:08:34.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.272 "is_configured": true, 00:08:34.272 "data_offset": 2048, 00:08:34.272 "data_size": 63488 00:08:34.272 }, 00:08:34.272 { 00:08:34.272 "name": "pt2", 00:08:34.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.272 "is_configured": true, 00:08:34.272 "data_offset": 2048, 00:08:34.272 "data_size": 63488 00:08:34.272 } 00:08:34.272 ] 00:08:34.272 }' 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.272 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.838 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.839 
14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.839 [2024-11-20 14:25:35.736081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.839 "name": "raid_bdev1", 00:08:34.839 "aliases": [ 00:08:34.839 "2c54f729-98cf-4799-8f1f-7bfdae0237d4" 00:08:34.839 ], 00:08:34.839 "product_name": "Raid Volume", 00:08:34.839 "block_size": 512, 00:08:34.839 "num_blocks": 126976, 00:08:34.839 "uuid": "2c54f729-98cf-4799-8f1f-7bfdae0237d4", 00:08:34.839 "assigned_rate_limits": { 00:08:34.839 "rw_ios_per_sec": 0, 00:08:34.839 "rw_mbytes_per_sec": 0, 00:08:34.839 "r_mbytes_per_sec": 0, 00:08:34.839 "w_mbytes_per_sec": 0 00:08:34.839 }, 00:08:34.839 "claimed": false, 00:08:34.839 "zoned": false, 00:08:34.839 "supported_io_types": { 00:08:34.839 "read": true, 00:08:34.839 "write": true, 00:08:34.839 "unmap": true, 00:08:34.839 "flush": true, 00:08:34.839 "reset": true, 00:08:34.839 "nvme_admin": false, 00:08:34.839 "nvme_io": false, 00:08:34.839 "nvme_io_md": false, 00:08:34.839 "write_zeroes": true, 00:08:34.839 "zcopy": false, 00:08:34.839 "get_zone_info": false, 00:08:34.839 "zone_management": false, 00:08:34.839 "zone_append": false, 00:08:34.839 "compare": false, 00:08:34.839 "compare_and_write": false, 00:08:34.839 "abort": false, 00:08:34.839 "seek_hole": false, 00:08:34.839 "seek_data": false, 00:08:34.839 "copy": false, 00:08:34.839 "nvme_iov_md": false 00:08:34.839 }, 00:08:34.839 "memory_domains": [ 00:08:34.839 { 00:08:34.839 "dma_device_id": "system", 00:08:34.839 "dma_device_type": 1 00:08:34.839 }, 00:08:34.839 { 00:08:34.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.839 "dma_device_type": 2 00:08:34.839 }, 00:08:34.839 { 00:08:34.839 "dma_device_id": "system", 00:08:34.839 
"dma_device_type": 1 00:08:34.839 }, 00:08:34.839 { 00:08:34.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.839 "dma_device_type": 2 00:08:34.839 } 00:08:34.839 ], 00:08:34.839 "driver_specific": { 00:08:34.839 "raid": { 00:08:34.839 "uuid": "2c54f729-98cf-4799-8f1f-7bfdae0237d4", 00:08:34.839 "strip_size_kb": 64, 00:08:34.839 "state": "online", 00:08:34.839 "raid_level": "concat", 00:08:34.839 "superblock": true, 00:08:34.839 "num_base_bdevs": 2, 00:08:34.839 "num_base_bdevs_discovered": 2, 00:08:34.839 "num_base_bdevs_operational": 2, 00:08:34.839 "base_bdevs_list": [ 00:08:34.839 { 00:08:34.839 "name": "pt1", 00:08:34.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.839 "is_configured": true, 00:08:34.839 "data_offset": 2048, 00:08:34.839 "data_size": 63488 00:08:34.839 }, 00:08:34.839 { 00:08:34.839 "name": "pt2", 00:08:34.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.839 "is_configured": true, 00:08:34.839 "data_offset": 2048, 00:08:34.839 "data_size": 63488 00:08:34.839 } 00:08:34.839 ] 00:08:34.839 } 00:08:34.839 } 00:08:34.839 }' 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:34.839 pt2' 00:08:34.839 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.098 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.099 14:25:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:35.099 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.099 14:25:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.099 [2024-11-20 14:25:36.052255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2c54f729-98cf-4799-8f1f-7bfdae0237d4 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2c54f729-98cf-4799-8f1f-7bfdae0237d4 ']' 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.099 [2024-11-20 14:25:36.103794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.099 [2024-11-20 14:25:36.104066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.099 [2024-11-20 14:25:36.104351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.099 [2024-11-20 14:25:36.104525] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.099 [2024-11-20 14:25:36.104699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.099 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.358 
14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.358 [2024-11-20 14:25:36.243964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:35.358 [2024-11-20 14:25:36.246963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:35.358 [2024-11-20 14:25:36.247078] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:35.358 [2024-11-20 14:25:36.247183] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:35.358 [2024-11-20 14:25:36.247211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.358 [2024-11-20 14:25:36.247228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:35.358 request: 00:08:35.358 { 00:08:35.358 "name": "raid_bdev1", 00:08:35.358 "raid_level": "concat", 00:08:35.358 "base_bdevs": [ 00:08:35.358 "malloc1", 00:08:35.358 "malloc2" 00:08:35.358 ], 00:08:35.358 "strip_size_kb": 64, 00:08:35.358 "superblock": false, 00:08:35.358 "method": "bdev_raid_create", 00:08:35.358 "req_id": 1 00:08:35.358 } 00:08:35.358 Got JSON-RPC error response 00:08:35.358 response: 00:08:35.358 { 00:08:35.358 "code": -17, 00:08:35.358 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:35.358 } 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.358 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.358 [2024-11-20 14:25:36.308097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:35.358 [2024-11-20 14:25:36.308402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.358 [2024-11-20 14:25:36.308490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:35.358 [2024-11-20 14:25:36.308619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.358 [2024-11-20 14:25:36.312011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.358 [2024-11-20 14:25:36.312182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:35.358 [2024-11-20 14:25:36.312430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:35.359 [2024-11-20 14:25:36.312657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.359 pt1 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.359 "name": "raid_bdev1", 00:08:35.359 "uuid": "2c54f729-98cf-4799-8f1f-7bfdae0237d4", 00:08:35.359 "strip_size_kb": 64, 00:08:35.359 "state": "configuring", 00:08:35.359 "raid_level": "concat", 00:08:35.359 "superblock": true, 00:08:35.359 "num_base_bdevs": 2, 00:08:35.359 "num_base_bdevs_discovered": 1, 00:08:35.359 "num_base_bdevs_operational": 2, 00:08:35.359 "base_bdevs_list": [ 00:08:35.359 { 00:08:35.359 "name": "pt1", 00:08:35.359 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.359 "is_configured": true, 00:08:35.359 "data_offset": 2048, 00:08:35.359 "data_size": 63488 00:08:35.359 }, 00:08:35.359 { 00:08:35.359 "name": null, 00:08:35.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.359 "is_configured": false, 00:08:35.359 "data_offset": 2048, 00:08:35.359 "data_size": 63488 00:08:35.359 } 00:08:35.359 ] 00:08:35.359 }' 00:08:35.359 14:25:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.359 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.925 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:35.925 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:35.925 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.925 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:35.925 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.925 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.925 [2024-11-20 14:25:36.856759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:35.925 [2024-11-20 14:25:36.857033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.925 [2024-11-20 14:25:36.857110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:35.925 [2024-11-20 14:25:36.857396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.925 [2024-11-20 14:25:36.858063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.925 [2024-11-20 14:25:36.858109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:35.925 [2024-11-20 14:25:36.858224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:35.925 [2024-11-20 14:25:36.858268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.925 [2024-11-20 14:25:36.858431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.925 [2024-11-20 14:25:36.858460] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:35.925 [2024-11-20 14:25:36.858809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:35.925 [2024-11-20 14:25:36.858996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.925 [2024-11-20 14:25:36.859011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:35.925 [2024-11-20 14:25:36.859184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.925 pt2 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.926 "name": "raid_bdev1", 00:08:35.926 "uuid": "2c54f729-98cf-4799-8f1f-7bfdae0237d4", 00:08:35.926 "strip_size_kb": 64, 00:08:35.926 "state": "online", 00:08:35.926 "raid_level": "concat", 00:08:35.926 "superblock": true, 00:08:35.926 "num_base_bdevs": 2, 00:08:35.926 "num_base_bdevs_discovered": 2, 00:08:35.926 "num_base_bdevs_operational": 2, 00:08:35.926 "base_bdevs_list": [ 00:08:35.926 { 00:08:35.926 "name": "pt1", 00:08:35.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.926 "is_configured": true, 00:08:35.926 "data_offset": 2048, 00:08:35.926 "data_size": 63488 00:08:35.926 }, 00:08:35.926 { 00:08:35.926 "name": "pt2", 00:08:35.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.926 "is_configured": true, 00:08:35.926 "data_offset": 2048, 00:08:35.926 "data_size": 63488 00:08:35.926 } 00:08:35.926 ] 00:08:35.926 }' 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.926 14:25:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:36.493 
14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.493 [2024-11-20 14:25:37.325193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:36.493 "name": "raid_bdev1", 00:08:36.493 "aliases": [ 00:08:36.493 "2c54f729-98cf-4799-8f1f-7bfdae0237d4" 00:08:36.493 ], 00:08:36.493 "product_name": "Raid Volume", 00:08:36.493 "block_size": 512, 00:08:36.493 "num_blocks": 126976, 00:08:36.493 "uuid": "2c54f729-98cf-4799-8f1f-7bfdae0237d4", 00:08:36.493 "assigned_rate_limits": { 00:08:36.493 "rw_ios_per_sec": 0, 00:08:36.493 "rw_mbytes_per_sec": 0, 00:08:36.493 "r_mbytes_per_sec": 0, 00:08:36.493 "w_mbytes_per_sec": 0 00:08:36.493 }, 00:08:36.493 "claimed": false, 00:08:36.493 "zoned": false, 00:08:36.493 "supported_io_types": { 00:08:36.493 "read": true, 00:08:36.493 "write": true, 00:08:36.493 "unmap": true, 00:08:36.493 "flush": true, 00:08:36.493 "reset": true, 00:08:36.493 "nvme_admin": false, 00:08:36.493 "nvme_io": false, 00:08:36.493 "nvme_io_md": false, 00:08:36.493 
"write_zeroes": true, 00:08:36.493 "zcopy": false, 00:08:36.493 "get_zone_info": false, 00:08:36.493 "zone_management": false, 00:08:36.493 "zone_append": false, 00:08:36.493 "compare": false, 00:08:36.493 "compare_and_write": false, 00:08:36.493 "abort": false, 00:08:36.493 "seek_hole": false, 00:08:36.493 "seek_data": false, 00:08:36.493 "copy": false, 00:08:36.493 "nvme_iov_md": false 00:08:36.493 }, 00:08:36.493 "memory_domains": [ 00:08:36.493 { 00:08:36.493 "dma_device_id": "system", 00:08:36.493 "dma_device_type": 1 00:08:36.493 }, 00:08:36.493 { 00:08:36.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.493 "dma_device_type": 2 00:08:36.493 }, 00:08:36.493 { 00:08:36.493 "dma_device_id": "system", 00:08:36.493 "dma_device_type": 1 00:08:36.493 }, 00:08:36.493 { 00:08:36.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.493 "dma_device_type": 2 00:08:36.493 } 00:08:36.493 ], 00:08:36.493 "driver_specific": { 00:08:36.493 "raid": { 00:08:36.493 "uuid": "2c54f729-98cf-4799-8f1f-7bfdae0237d4", 00:08:36.493 "strip_size_kb": 64, 00:08:36.493 "state": "online", 00:08:36.493 "raid_level": "concat", 00:08:36.493 "superblock": true, 00:08:36.493 "num_base_bdevs": 2, 00:08:36.493 "num_base_bdevs_discovered": 2, 00:08:36.493 "num_base_bdevs_operational": 2, 00:08:36.493 "base_bdevs_list": [ 00:08:36.493 { 00:08:36.493 "name": "pt1", 00:08:36.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:36.493 "is_configured": true, 00:08:36.493 "data_offset": 2048, 00:08:36.493 "data_size": 63488 00:08:36.493 }, 00:08:36.493 { 00:08:36.493 "name": "pt2", 00:08:36.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:36.493 "is_configured": true, 00:08:36.493 "data_offset": 2048, 00:08:36.493 "data_size": 63488 00:08:36.493 } 00:08:36.493 ] 00:08:36.493 } 00:08:36.493 } 00:08:36.493 }' 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:36.493 pt2' 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.493 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.494 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:36.751 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.751 14:25:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:36.751 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:36.751 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:36.751 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.751 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.751 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:36.752 [2024-11-20 14:25:37.605273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2c54f729-98cf-4799-8f1f-7bfdae0237d4 '!=' 2c54f729-98cf-4799-8f1f-7bfdae0237d4 ']' 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62245 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62245 ']' 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62245 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62245 00:08:36.752 killing process with pid 62245 
00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62245' 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62245 00:08:36.752 [2024-11-20 14:25:37.690396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.752 14:25:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62245 00:08:36.752 [2024-11-20 14:25:37.690725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.752 [2024-11-20 14:25:37.691071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.752 [2024-11-20 14:25:37.691108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:37.010 [2024-11-20 14:25:37.876179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.978 14:25:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:37.978 00:08:37.978 real 0m4.979s 00:08:37.978 user 0m7.278s 00:08:37.978 sys 0m0.772s 00:08:37.978 ************************************ 00:08:37.978 END TEST raid_superblock_test 00:08:37.978 ************************************ 00:08:37.978 14:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.978 14:25:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.978 14:25:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:37.978 14:25:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.978 14:25:39 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.978 14:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.978 ************************************ 00:08:37.978 START TEST raid_read_error_test 00:08:37.978 ************************************ 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:37.978 14:25:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ouTxhymH3a 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62457 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62457 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62457 ']' 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.978 14:25:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.236 [2024-11-20 14:25:39.122051] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:38.236 [2024-11-20 14:25:39.122230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62457 ] 00:08:38.493 [2024-11-20 14:25:39.304798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.493 [2024-11-20 14:25:39.460449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.751 [2024-11-20 14:25:39.696028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.751 [2024-11-20 14:25:39.696117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 BaseBdev1_malloc 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 true 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.319 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 [2024-11-20 14:25:40.192286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.319 [2024-11-20 14:25:40.192373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.319 [2024-11-20 14:25:40.192404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.319 [2024-11-20 14:25:40.192423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.319 [2024-11-20 14:25:40.195360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.319 [2024-11-20 14:25:40.195412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.319 BaseBdev1 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:39.320 BaseBdev2_malloc 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.320 true 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.320 [2024-11-20 14:25:40.249213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.320 [2024-11-20 14:25:40.249289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.320 [2024-11-20 14:25:40.249316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:39.320 [2024-11-20 14:25:40.249334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.320 [2024-11-20 14:25:40.252167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.320 [2024-11-20 14:25:40.252371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.320 BaseBdev2 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:39.320 
14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.320 [2024-11-20 14:25:40.257363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.320 [2024-11-20 14:25:40.259936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.320 [2024-11-20 14:25:40.260205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:39.320 [2024-11-20 14:25:40.260231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:39.320 [2024-11-20 14:25:40.260543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:39.320 [2024-11-20 14:25:40.260792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:39.320 [2024-11-20 14:25:40.260816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:39.320 [2024-11-20 14:25:40.261016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.320 "name": "raid_bdev1", 00:08:39.320 "uuid": "9f4c6897-7040-4075-9d5b-3b972f6989d6", 00:08:39.320 "strip_size_kb": 64, 00:08:39.320 "state": "online", 00:08:39.320 "raid_level": "concat", 00:08:39.320 "superblock": true, 00:08:39.320 "num_base_bdevs": 2, 00:08:39.320 "num_base_bdevs_discovered": 2, 00:08:39.320 "num_base_bdevs_operational": 2, 00:08:39.320 "base_bdevs_list": [ 00:08:39.320 { 00:08:39.320 "name": "BaseBdev1", 00:08:39.320 "uuid": "c73de2c9-7e96-5a83-b765-78fd4a4663bd", 00:08:39.320 "is_configured": true, 00:08:39.320 "data_offset": 2048, 00:08:39.320 "data_size": 63488 00:08:39.320 }, 00:08:39.320 { 00:08:39.320 "name": "BaseBdev2", 00:08:39.320 "uuid": "41c79e50-5649-5c8d-ac72-4f8a392588c9", 00:08:39.320 "is_configured": true, 00:08:39.320 "data_offset": 2048, 00:08:39.320 "data_size": 63488 00:08:39.320 } 00:08:39.320 ] 00:08:39.320 }' 00:08:39.320 14:25:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.320 14:25:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.888 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:39.888 14:25:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:39.888 [2024-11-20 14:25:40.902993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.824 "name": "raid_bdev1", 00:08:40.824 "uuid": "9f4c6897-7040-4075-9d5b-3b972f6989d6", 00:08:40.824 "strip_size_kb": 64, 00:08:40.824 "state": "online", 00:08:40.824 "raid_level": "concat", 00:08:40.824 "superblock": true, 00:08:40.824 "num_base_bdevs": 2, 00:08:40.824 "num_base_bdevs_discovered": 2, 00:08:40.824 "num_base_bdevs_operational": 2, 00:08:40.824 "base_bdevs_list": [ 00:08:40.824 { 00:08:40.824 "name": "BaseBdev1", 00:08:40.824 "uuid": "c73de2c9-7e96-5a83-b765-78fd4a4663bd", 00:08:40.824 "is_configured": true, 00:08:40.824 "data_offset": 2048, 00:08:40.824 "data_size": 63488 00:08:40.824 }, 00:08:40.824 { 00:08:40.824 "name": "BaseBdev2", 00:08:40.824 "uuid": "41c79e50-5649-5c8d-ac72-4f8a392588c9", 00:08:40.824 "is_configured": true, 00:08:40.824 "data_offset": 2048, 00:08:40.824 "data_size": 63488 00:08:40.824 } 00:08:40.824 ] 00:08:40.824 }' 00:08:40.824 14:25:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.824 14:25:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.393 [2024-11-20 14:25:42.301536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.393 [2024-11-20 14:25:42.301580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.393 [2024-11-20 14:25:42.304984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.393 [2024-11-20 14:25:42.305199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.393 [2024-11-20 14:25:42.305263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.393 [2024-11-20 14:25:42.305289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:41.393 { 00:08:41.393 "results": [ 00:08:41.393 { 00:08:41.393 "job": "raid_bdev1", 00:08:41.393 "core_mask": "0x1", 00:08:41.393 "workload": "randrw", 00:08:41.393 "percentage": 50, 00:08:41.393 "status": "finished", 00:08:41.393 "queue_depth": 1, 00:08:41.393 "io_size": 131072, 00:08:41.393 "runtime": 1.395801, 00:08:41.393 "iops": 10667.709795307497, 00:08:41.393 "mibps": 1333.4637244134371, 00:08:41.393 "io_failed": 1, 00:08:41.393 "io_timeout": 0, 00:08:41.393 "avg_latency_us": 130.83919927228771, 00:08:41.393 "min_latency_us": 43.52, 00:08:41.393 "max_latency_us": 1861.8181818181818 00:08:41.393 } 00:08:41.393 ], 00:08:41.393 "core_count": 1 00:08:41.393 } 00:08:41.393 14:25:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62457 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62457 ']' 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62457 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62457 00:08:41.393 killing process with pid 62457 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62457' 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62457 00:08:41.393 [2024-11-20 14:25:42.343587] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.393 14:25:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62457 00:08:41.652 [2024-11-20 14:25:42.465466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ouTxhymH3a 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:42.589 14:25:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:42.589 00:08:42.589 real 0m4.581s 00:08:42.589 user 0m5.733s 00:08:42.589 sys 0m0.592s 00:08:42.589 ************************************ 00:08:42.589 END TEST raid_read_error_test 00:08:42.589 ************************************ 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.589 14:25:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.589 14:25:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:42.589 14:25:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:42.589 14:25:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.589 14:25:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.848 ************************************ 00:08:42.848 START TEST raid_write_error_test 00:08:42.848 ************************************ 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:42.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9GBYns81K6 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62602 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62602 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62602 ']' 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.848 14:25:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.848 [2024-11-20 14:25:43.778936] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:08:42.848 [2024-11-20 14:25:43.779143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62602 ] 00:08:43.105 [2024-11-20 14:25:43.970507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.105 [2024-11-20 14:25:44.120449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.362 [2024-11-20 14:25:44.332369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.362 [2024-11-20 14:25:44.332602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.926 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.926 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:43.926 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.926 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:43.926 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.926 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 BaseBdev1_malloc 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 true 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 [2024-11-20 14:25:44.882404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:43.927 [2024-11-20 14:25:44.882492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.927 [2024-11-20 14:25:44.882527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:43.927 [2024-11-20 14:25:44.882546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.927 [2024-11-20 14:25:44.885748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.927 [2024-11-20 14:25:44.885981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:43.927 BaseBdev1 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 BaseBdev2_malloc 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:43.927 14:25:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 true 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 [2024-11-20 14:25:44.940999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:43.927 [2024-11-20 14:25:44.941237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.927 [2024-11-20 14:25:44.941280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:43.927 [2024-11-20 14:25:44.941301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.927 [2024-11-20 14:25:44.944372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.927 [2024-11-20 14:25:44.944554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:43.927 BaseBdev2 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 [2024-11-20 14:25:44.953136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:43.927 [2024-11-20 14:25:44.955844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.927 [2024-11-20 14:25:44.956144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.927 [2024-11-20 14:25:44.956170] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:43.927 [2024-11-20 14:25:44.956551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:43.927 [2024-11-20 14:25:44.956823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.927 [2024-11-20 14:25:44.956847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:43.927 [2024-11-20 14:25:44.957135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.927 14:25:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.927 14:25:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.184 14:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.184 "name": "raid_bdev1", 00:08:44.184 "uuid": "37748053-00bd-437e-b73a-6640be2424f4", 00:08:44.184 "strip_size_kb": 64, 00:08:44.184 "state": "online", 00:08:44.184 "raid_level": "concat", 00:08:44.184 "superblock": true, 00:08:44.184 "num_base_bdevs": 2, 00:08:44.184 "num_base_bdevs_discovered": 2, 00:08:44.184 "num_base_bdevs_operational": 2, 00:08:44.184 "base_bdevs_list": [ 00:08:44.184 { 00:08:44.184 "name": "BaseBdev1", 00:08:44.184 "uuid": "fa703340-58f5-515b-8455-3f9252f6d022", 00:08:44.184 "is_configured": true, 00:08:44.184 "data_offset": 2048, 00:08:44.184 "data_size": 63488 00:08:44.184 }, 00:08:44.184 { 00:08:44.184 "name": "BaseBdev2", 00:08:44.184 "uuid": "7a6889ef-0b15-5cf1-8e89-193be5e54465", 00:08:44.184 "is_configured": true, 00:08:44.184 "data_offset": 2048, 00:08:44.184 "data_size": 63488 00:08:44.184 } 00:08:44.184 ] 00:08:44.184 }' 00:08:44.184 14:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.184 14:25:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.442 14:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:44.442 14:25:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:44.699 [2024-11-20 14:25:45.614893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.632 "name": "raid_bdev1", 00:08:45.632 "uuid": "37748053-00bd-437e-b73a-6640be2424f4", 00:08:45.632 "strip_size_kb": 64, 00:08:45.632 "state": "online", 00:08:45.632 "raid_level": "concat", 00:08:45.632 "superblock": true, 00:08:45.632 "num_base_bdevs": 2, 00:08:45.632 "num_base_bdevs_discovered": 2, 00:08:45.632 "num_base_bdevs_operational": 2, 00:08:45.632 "base_bdevs_list": [ 00:08:45.632 { 00:08:45.632 "name": "BaseBdev1", 00:08:45.632 "uuid": "fa703340-58f5-515b-8455-3f9252f6d022", 00:08:45.632 "is_configured": true, 00:08:45.632 "data_offset": 2048, 00:08:45.632 "data_size": 63488 00:08:45.632 }, 00:08:45.632 { 00:08:45.632 "name": "BaseBdev2", 00:08:45.632 "uuid": "7a6889ef-0b15-5cf1-8e89-193be5e54465", 00:08:45.632 "is_configured": true, 00:08:45.632 "data_offset": 2048, 00:08:45.632 "data_size": 63488 00:08:45.632 } 00:08:45.632 ] 00:08:45.632 }' 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.632 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.197 [2024-11-20 14:25:46.988586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.197 [2024-11-20 14:25:46.988682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.197 [2024-11-20 14:25:46.992099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.197 [2024-11-20 14:25:46.992175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.197 [2024-11-20 14:25:46.992229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.197 [2024-11-20 14:25:46.992251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:46.197 { 00:08:46.197 "results": [ 00:08:46.197 { 00:08:46.197 "job": "raid_bdev1", 00:08:46.197 "core_mask": "0x1", 00:08:46.197 "workload": "randrw", 00:08:46.197 "percentage": 50, 00:08:46.197 "status": "finished", 00:08:46.197 "queue_depth": 1, 00:08:46.197 "io_size": 131072, 00:08:46.197 "runtime": 1.370783, 00:08:46.197 "iops": 9501.139129971702, 00:08:46.197 "mibps": 1187.6423912464627, 00:08:46.197 "io_failed": 1, 00:08:46.197 "io_timeout": 0, 00:08:46.197 "avg_latency_us": 147.72974266271157, 00:08:46.197 "min_latency_us": 42.123636363636365, 00:08:46.197 "max_latency_us": 1884.16 00:08:46.197 } 00:08:46.197 ], 00:08:46.197 "core_count": 1 00:08:46.197 } 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62602 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 62602 ']' 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62602 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:46.197 14:25:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.197 14:25:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62602 00:08:46.197 killing process with pid 62602 00:08:46.197 14:25:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.197 14:25:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.197 14:25:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62602' 00:08:46.197 14:25:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62602 00:08:46.197 14:25:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62602 00:08:46.197 [2024-11-20 14:25:47.030491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.197 [2024-11-20 14:25:47.164543] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9GBYns81K6 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:47.570 00:08:47.570 real 0m4.740s 00:08:47.570 user 0m5.932s 00:08:47.570 sys 0m0.600s 00:08:47.570 ************************************ 00:08:47.570 END TEST raid_write_error_test 00:08:47.570 ************************************ 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.570 14:25:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.570 14:25:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:47.570 14:25:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:47.570 14:25:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:47.570 14:25:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.570 14:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.570 ************************************ 00:08:47.570 START TEST raid_state_function_test 00:08:47.570 ************************************ 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62746 00:08:47.570 Process raid pid: 62746 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62746' 
00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62746 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62746 ']' 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.570 14:25:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.570 [2024-11-20 14:25:48.550039] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:08:47.570 [2024-11-20 14:25:48.550223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.845 [2024-11-20 14:25:48.729926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.845 [2024-11-20 14:25:48.882155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.103 [2024-11-20 14:25:49.113289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.103 [2024-11-20 14:25:49.113369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.666 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.666 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:48.666 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.666 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.666 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.666 [2024-11-20 14:25:49.575762] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.666 [2024-11-20 14:25:49.575872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.666 [2024-11-20 14:25:49.575894] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.666 [2024-11-20 14:25:49.575915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.666 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.666 14:25:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.667 "name": "Existed_Raid", 00:08:48.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.667 "strip_size_kb": 0, 00:08:48.667 "state": "configuring", 00:08:48.667 
"raid_level": "raid1", 00:08:48.667 "superblock": false, 00:08:48.667 "num_base_bdevs": 2, 00:08:48.667 "num_base_bdevs_discovered": 0, 00:08:48.667 "num_base_bdevs_operational": 2, 00:08:48.667 "base_bdevs_list": [ 00:08:48.667 { 00:08:48.667 "name": "BaseBdev1", 00:08:48.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.667 "is_configured": false, 00:08:48.667 "data_offset": 0, 00:08:48.667 "data_size": 0 00:08:48.667 }, 00:08:48.667 { 00:08:48.667 "name": "BaseBdev2", 00:08:48.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.667 "is_configured": false, 00:08:48.667 "data_offset": 0, 00:08:48.667 "data_size": 0 00:08:48.667 } 00:08:48.667 ] 00:08:48.667 }' 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.667 14:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.234 [2024-11-20 14:25:50.075833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.234 [2024-11-20 14:25:50.075905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:49.234 [2024-11-20 14:25:50.087839] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.234 [2024-11-20 14:25:50.087922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.234 [2024-11-20 14:25:50.087941] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.234 [2024-11-20 14:25:50.087964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.234 [2024-11-20 14:25:50.138926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.234 BaseBdev1 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.234 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.234 [ 00:08:49.234 { 00:08:49.234 "name": "BaseBdev1", 00:08:49.234 "aliases": [ 00:08:49.234 "2a3155ed-fec3-4d3e-b1ee-e37fa6464453" 00:08:49.234 ], 00:08:49.234 "product_name": "Malloc disk", 00:08:49.234 "block_size": 512, 00:08:49.234 "num_blocks": 65536, 00:08:49.234 "uuid": "2a3155ed-fec3-4d3e-b1ee-e37fa6464453", 00:08:49.234 "assigned_rate_limits": { 00:08:49.234 "rw_ios_per_sec": 0, 00:08:49.234 "rw_mbytes_per_sec": 0, 00:08:49.234 "r_mbytes_per_sec": 0, 00:08:49.234 "w_mbytes_per_sec": 0 00:08:49.234 }, 00:08:49.234 "claimed": true, 00:08:49.234 "claim_type": "exclusive_write", 00:08:49.234 "zoned": false, 00:08:49.234 "supported_io_types": { 00:08:49.234 "read": true, 00:08:49.234 "write": true, 00:08:49.234 "unmap": true, 00:08:49.234 "flush": true, 00:08:49.234 "reset": true, 00:08:49.235 "nvme_admin": false, 00:08:49.235 "nvme_io": false, 00:08:49.235 "nvme_io_md": false, 00:08:49.235 "write_zeroes": true, 00:08:49.235 "zcopy": true, 00:08:49.235 "get_zone_info": false, 00:08:49.235 "zone_management": false, 00:08:49.235 "zone_append": false, 00:08:49.235 "compare": false, 00:08:49.235 "compare_and_write": false, 00:08:49.235 "abort": true, 00:08:49.235 "seek_hole": false, 00:08:49.235 "seek_data": false, 00:08:49.235 "copy": true, 00:08:49.235 "nvme_iov_md": 
false 00:08:49.235 }, 00:08:49.235 "memory_domains": [ 00:08:49.235 { 00:08:49.235 "dma_device_id": "system", 00:08:49.235 "dma_device_type": 1 00:08:49.235 }, 00:08:49.235 { 00:08:49.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.235 "dma_device_type": 2 00:08:49.235 } 00:08:49.235 ], 00:08:49.235 "driver_specific": {} 00:08:49.235 } 00:08:49.235 ] 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.235 
14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.235 "name": "Existed_Raid", 00:08:49.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.235 "strip_size_kb": 0, 00:08:49.235 "state": "configuring", 00:08:49.235 "raid_level": "raid1", 00:08:49.235 "superblock": false, 00:08:49.235 "num_base_bdevs": 2, 00:08:49.235 "num_base_bdevs_discovered": 1, 00:08:49.235 "num_base_bdevs_operational": 2, 00:08:49.235 "base_bdevs_list": [ 00:08:49.235 { 00:08:49.235 "name": "BaseBdev1", 00:08:49.235 "uuid": "2a3155ed-fec3-4d3e-b1ee-e37fa6464453", 00:08:49.235 "is_configured": true, 00:08:49.235 "data_offset": 0, 00:08:49.235 "data_size": 65536 00:08:49.235 }, 00:08:49.235 { 00:08:49.235 "name": "BaseBdev2", 00:08:49.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.235 "is_configured": false, 00:08:49.235 "data_offset": 0, 00:08:49.235 "data_size": 0 00:08:49.235 } 00:08:49.235 ] 00:08:49.235 }' 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.235 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.803 [2024-11-20 14:25:50.679157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.803 [2024-11-20 14:25:50.679252] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.803 [2024-11-20 14:25:50.687151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.803 [2024-11-20 14:25:50.689842] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.803 [2024-11-20 14:25:50.689909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.803 "name": "Existed_Raid", 00:08:49.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.803 "strip_size_kb": 0, 00:08:49.803 "state": "configuring", 00:08:49.803 "raid_level": "raid1", 00:08:49.803 "superblock": false, 00:08:49.803 "num_base_bdevs": 2, 00:08:49.803 "num_base_bdevs_discovered": 1, 00:08:49.803 "num_base_bdevs_operational": 2, 00:08:49.803 "base_bdevs_list": [ 00:08:49.803 { 00:08:49.803 "name": "BaseBdev1", 00:08:49.803 "uuid": "2a3155ed-fec3-4d3e-b1ee-e37fa6464453", 00:08:49.803 "is_configured": true, 00:08:49.803 "data_offset": 0, 00:08:49.803 "data_size": 65536 00:08:49.803 }, 00:08:49.803 { 00:08:49.803 "name": "BaseBdev2", 00:08:49.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.803 "is_configured": false, 00:08:49.803 "data_offset": 0, 00:08:49.803 "data_size": 0 00:08:49.803 } 00:08:49.803 ] 
00:08:49.803 }' 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.803 14:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.372 [2024-11-20 14:25:51.230278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.372 [2024-11-20 14:25:51.230387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.372 [2024-11-20 14:25:51.230404] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:50.372 [2024-11-20 14:25:51.230808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:50.372 [2024-11-20 14:25:51.231096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.372 [2024-11-20 14:25:51.231132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:50.372 [2024-11-20 14:25:51.231536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.372 BaseBdev2 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.372 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.373 [ 00:08:50.373 { 00:08:50.373 "name": "BaseBdev2", 00:08:50.373 "aliases": [ 00:08:50.373 "d40d72a4-7db4-428a-8683-1c3b2a26784d" 00:08:50.373 ], 00:08:50.373 "product_name": "Malloc disk", 00:08:50.373 "block_size": 512, 00:08:50.373 "num_blocks": 65536, 00:08:50.373 "uuid": "d40d72a4-7db4-428a-8683-1c3b2a26784d", 00:08:50.373 "assigned_rate_limits": { 00:08:50.373 "rw_ios_per_sec": 0, 00:08:50.373 "rw_mbytes_per_sec": 0, 00:08:50.373 "r_mbytes_per_sec": 0, 00:08:50.373 "w_mbytes_per_sec": 0 00:08:50.373 }, 00:08:50.373 "claimed": true, 00:08:50.373 "claim_type": "exclusive_write", 00:08:50.373 "zoned": false, 00:08:50.373 "supported_io_types": { 00:08:50.373 "read": true, 00:08:50.373 "write": true, 00:08:50.373 "unmap": true, 00:08:50.373 "flush": true, 00:08:50.373 "reset": true, 00:08:50.373 "nvme_admin": false, 00:08:50.373 "nvme_io": false, 00:08:50.373 "nvme_io_md": false, 00:08:50.373 "write_zeroes": 
true, 00:08:50.373 "zcopy": true, 00:08:50.373 "get_zone_info": false, 00:08:50.373 "zone_management": false, 00:08:50.373 "zone_append": false, 00:08:50.373 "compare": false, 00:08:50.373 "compare_and_write": false, 00:08:50.373 "abort": true, 00:08:50.373 "seek_hole": false, 00:08:50.373 "seek_data": false, 00:08:50.373 "copy": true, 00:08:50.373 "nvme_iov_md": false 00:08:50.373 }, 00:08:50.373 "memory_domains": [ 00:08:50.373 { 00:08:50.373 "dma_device_id": "system", 00:08:50.373 "dma_device_type": 1 00:08:50.373 }, 00:08:50.373 { 00:08:50.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.373 "dma_device_type": 2 00:08:50.373 } 00:08:50.373 ], 00:08:50.373 "driver_specific": {} 00:08:50.373 } 00:08:50.373 ] 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.373 14:25:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.373 "name": "Existed_Raid", 00:08:50.373 "uuid": "677fa551-04a3-4c0e-8f0c-010ef58b9f86", 00:08:50.373 "strip_size_kb": 0, 00:08:50.373 "state": "online", 00:08:50.373 "raid_level": "raid1", 00:08:50.373 "superblock": false, 00:08:50.373 "num_base_bdevs": 2, 00:08:50.373 "num_base_bdevs_discovered": 2, 00:08:50.373 "num_base_bdevs_operational": 2, 00:08:50.373 "base_bdevs_list": [ 00:08:50.373 { 00:08:50.373 "name": "BaseBdev1", 00:08:50.373 "uuid": "2a3155ed-fec3-4d3e-b1ee-e37fa6464453", 00:08:50.373 "is_configured": true, 00:08:50.373 "data_offset": 0, 00:08:50.373 "data_size": 65536 00:08:50.373 }, 00:08:50.373 { 00:08:50.373 "name": "BaseBdev2", 00:08:50.373 "uuid": "d40d72a4-7db4-428a-8683-1c3b2a26784d", 00:08:50.373 "is_configured": true, 00:08:50.373 "data_offset": 0, 00:08:50.373 "data_size": 65536 00:08:50.373 } 00:08:50.373 ] 00:08:50.373 }' 00:08:50.373 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.373 14:25:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.941 [2024-11-20 14:25:51.790862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.941 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.941 "name": "Existed_Raid", 00:08:50.941 "aliases": [ 00:08:50.941 "677fa551-04a3-4c0e-8f0c-010ef58b9f86" 00:08:50.941 ], 00:08:50.941 "product_name": "Raid Volume", 00:08:50.941 "block_size": 512, 00:08:50.941 "num_blocks": 65536, 00:08:50.941 "uuid": "677fa551-04a3-4c0e-8f0c-010ef58b9f86", 00:08:50.941 "assigned_rate_limits": { 00:08:50.941 "rw_ios_per_sec": 0, 00:08:50.941 "rw_mbytes_per_sec": 0, 00:08:50.941 "r_mbytes_per_sec": 0, 00:08:50.941 
"w_mbytes_per_sec": 0 00:08:50.941 }, 00:08:50.941 "claimed": false, 00:08:50.941 "zoned": false, 00:08:50.941 "supported_io_types": { 00:08:50.941 "read": true, 00:08:50.941 "write": true, 00:08:50.941 "unmap": false, 00:08:50.941 "flush": false, 00:08:50.941 "reset": true, 00:08:50.941 "nvme_admin": false, 00:08:50.941 "nvme_io": false, 00:08:50.941 "nvme_io_md": false, 00:08:50.941 "write_zeroes": true, 00:08:50.941 "zcopy": false, 00:08:50.941 "get_zone_info": false, 00:08:50.941 "zone_management": false, 00:08:50.941 "zone_append": false, 00:08:50.941 "compare": false, 00:08:50.941 "compare_and_write": false, 00:08:50.941 "abort": false, 00:08:50.941 "seek_hole": false, 00:08:50.941 "seek_data": false, 00:08:50.941 "copy": false, 00:08:50.941 "nvme_iov_md": false 00:08:50.941 }, 00:08:50.941 "memory_domains": [ 00:08:50.941 { 00:08:50.941 "dma_device_id": "system", 00:08:50.941 "dma_device_type": 1 00:08:50.941 }, 00:08:50.941 { 00:08:50.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.941 "dma_device_type": 2 00:08:50.941 }, 00:08:50.941 { 00:08:50.941 "dma_device_id": "system", 00:08:50.941 "dma_device_type": 1 00:08:50.941 }, 00:08:50.941 { 00:08:50.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.941 "dma_device_type": 2 00:08:50.941 } 00:08:50.941 ], 00:08:50.941 "driver_specific": { 00:08:50.941 "raid": { 00:08:50.941 "uuid": "677fa551-04a3-4c0e-8f0c-010ef58b9f86", 00:08:50.941 "strip_size_kb": 0, 00:08:50.941 "state": "online", 00:08:50.941 "raid_level": "raid1", 00:08:50.941 "superblock": false, 00:08:50.941 "num_base_bdevs": 2, 00:08:50.941 "num_base_bdevs_discovered": 2, 00:08:50.941 "num_base_bdevs_operational": 2, 00:08:50.941 "base_bdevs_list": [ 00:08:50.941 { 00:08:50.941 "name": "BaseBdev1", 00:08:50.941 "uuid": "2a3155ed-fec3-4d3e-b1ee-e37fa6464453", 00:08:50.941 "is_configured": true, 00:08:50.941 "data_offset": 0, 00:08:50.941 "data_size": 65536 00:08:50.941 }, 00:08:50.941 { 00:08:50.941 "name": "BaseBdev2", 00:08:50.941 "uuid": 
"d40d72a4-7db4-428a-8683-1c3b2a26784d", 00:08:50.941 "is_configured": true, 00:08:50.941 "data_offset": 0, 00:08:50.941 "data_size": 65536 00:08:50.941 } 00:08:50.941 ] 00:08:50.941 } 00:08:50.941 } 00:08:50.941 }' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.942 BaseBdev2' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.942 14:25:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.942 14:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 [2024-11-20 14:25:52.050660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.201 "name": "Existed_Raid", 00:08:51.201 "uuid": "677fa551-04a3-4c0e-8f0c-010ef58b9f86", 00:08:51.201 "strip_size_kb": 0, 00:08:51.201 "state": "online", 00:08:51.201 "raid_level": "raid1", 00:08:51.201 "superblock": false, 00:08:51.201 "num_base_bdevs": 2, 00:08:51.201 "num_base_bdevs_discovered": 1, 00:08:51.201 "num_base_bdevs_operational": 1, 00:08:51.201 "base_bdevs_list": [ 00:08:51.201 { 
00:08:51.201 "name": null, 00:08:51.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.201 "is_configured": false, 00:08:51.201 "data_offset": 0, 00:08:51.201 "data_size": 65536 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "name": "BaseBdev2", 00:08:51.201 "uuid": "d40d72a4-7db4-428a-8683-1c3b2a26784d", 00:08:51.201 "is_configured": true, 00:08:51.201 "data_offset": 0, 00:08:51.201 "data_size": 65536 00:08:51.201 } 00:08:51.201 ] 00:08:51.201 }' 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.201 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:51.768 [2024-11-20 14:25:52.696034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.768 [2024-11-20 14:25:52.696195] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.768 [2024-11-20 14:25:52.791941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.768 [2024-11-20 14:25:52.792039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.768 [2024-11-20 14:25:52.792066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.768 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62746 00:08:52.027 14:25:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62746 ']' 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62746 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62746 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.027 killing process with pid 62746 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62746' 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62746 00:08:52.027 [2024-11-20 14:25:52.880955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.027 14:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62746 00:08:52.027 [2024-11-20 14:25:52.896568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.402 14:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:53.402 00:08:53.402 real 0m5.595s 00:08:53.402 user 0m8.304s 00:08:53.402 sys 0m0.862s 00:08:53.402 14:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.402 ************************************ 00:08:53.402 END TEST raid_state_function_test 00:08:53.402 ************************************ 00:08:53.402 14:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.402 14:25:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:53.402 14:25:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.402 14:25:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.402 14:25:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.402 ************************************ 00:08:53.402 START TEST raid_state_function_test_sb 00:08:53.402 ************************************ 00:08:53.402 14:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:53.402 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63004 00:08:53.403 Process raid pid: 63004 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63004' 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63004 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63004 ']' 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.403 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.403 14:25:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.403 [2024-11-20 14:25:54.209837] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:53.403 [2024-11-20 14:25:54.210031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.403 [2024-11-20 14:25:54.394571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.661 [2024-11-20 14:25:54.546563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.932 [2024-11-20 14:25:54.778132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.932 [2024-11-20 14:25:54.778190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 [2024-11-20 14:25:55.193543] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.270 [2024-11-20 14:25:55.193648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.270 [2024-11-20 14:25:55.193673] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.270 [2024-11-20 14:25:55.193696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.270 "name": "Existed_Raid", 00:08:54.270 "uuid": "2223a05f-2845-44f4-bc54-a895780bb065", 00:08:54.270 "strip_size_kb": 0, 00:08:54.270 "state": "configuring", 00:08:54.270 "raid_level": "raid1", 00:08:54.270 "superblock": true, 00:08:54.270 "num_base_bdevs": 2, 00:08:54.270 "num_base_bdevs_discovered": 0, 00:08:54.270 "num_base_bdevs_operational": 2, 00:08:54.270 "base_bdevs_list": [ 00:08:54.270 { 00:08:54.270 "name": "BaseBdev1", 00:08:54.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.270 "is_configured": false, 00:08:54.270 "data_offset": 0, 00:08:54.270 "data_size": 0 00:08:54.270 }, 00:08:54.270 { 00:08:54.270 "name": "BaseBdev2", 00:08:54.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.270 "is_configured": false, 00:08:54.270 "data_offset": 0, 00:08:54.270 "data_size": 0 00:08:54.270 } 00:08:54.270 ] 00:08:54.270 }' 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.270 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.838 [2024-11-20 14:25:55.709641] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:54.838 [2024-11-20 14:25:55.709701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.838 [2024-11-20 14:25:55.717559] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.838 [2024-11-20 14:25:55.717647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.838 [2024-11-20 14:25:55.717670] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.838 [2024-11-20 14:25:55.717713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.838 [2024-11-20 14:25:55.766824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.838 BaseBdev1 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.838 [ 00:08:54.838 { 00:08:54.838 "name": "BaseBdev1", 00:08:54.838 "aliases": [ 00:08:54.838 "04ff3010-69e2-4227-880b-f8d50ed9164f" 00:08:54.838 ], 00:08:54.838 "product_name": "Malloc disk", 00:08:54.838 "block_size": 512, 00:08:54.838 "num_blocks": 65536, 00:08:54.838 "uuid": "04ff3010-69e2-4227-880b-f8d50ed9164f", 00:08:54.838 "assigned_rate_limits": { 00:08:54.838 "rw_ios_per_sec": 0, 00:08:54.838 "rw_mbytes_per_sec": 0, 00:08:54.838 "r_mbytes_per_sec": 0, 00:08:54.838 "w_mbytes_per_sec": 0 00:08:54.838 }, 00:08:54.838 "claimed": true, 
00:08:54.838 "claim_type": "exclusive_write", 00:08:54.838 "zoned": false, 00:08:54.838 "supported_io_types": { 00:08:54.838 "read": true, 00:08:54.838 "write": true, 00:08:54.838 "unmap": true, 00:08:54.838 "flush": true, 00:08:54.838 "reset": true, 00:08:54.838 "nvme_admin": false, 00:08:54.838 "nvme_io": false, 00:08:54.838 "nvme_io_md": false, 00:08:54.838 "write_zeroes": true, 00:08:54.838 "zcopy": true, 00:08:54.838 "get_zone_info": false, 00:08:54.838 "zone_management": false, 00:08:54.838 "zone_append": false, 00:08:54.838 "compare": false, 00:08:54.838 "compare_and_write": false, 00:08:54.838 "abort": true, 00:08:54.838 "seek_hole": false, 00:08:54.838 "seek_data": false, 00:08:54.838 "copy": true, 00:08:54.838 "nvme_iov_md": false 00:08:54.838 }, 00:08:54.838 "memory_domains": [ 00:08:54.838 { 00:08:54.838 "dma_device_id": "system", 00:08:54.838 "dma_device_type": 1 00:08:54.838 }, 00:08:54.838 { 00:08:54.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.838 "dma_device_type": 2 00:08:54.838 } 00:08:54.838 ], 00:08:54.838 "driver_specific": {} 00:08:54.838 } 00:08:54.838 ] 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:54.838 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.839 "name": "Existed_Raid", 00:08:54.839 "uuid": "f8d50b93-47ff-4733-8a6f-ffbfb7eb77e9", 00:08:54.839 "strip_size_kb": 0, 00:08:54.839 "state": "configuring", 00:08:54.839 "raid_level": "raid1", 00:08:54.839 "superblock": true, 00:08:54.839 "num_base_bdevs": 2, 00:08:54.839 "num_base_bdevs_discovered": 1, 00:08:54.839 "num_base_bdevs_operational": 2, 00:08:54.839 "base_bdevs_list": [ 00:08:54.839 { 00:08:54.839 "name": "BaseBdev1", 00:08:54.839 "uuid": "04ff3010-69e2-4227-880b-f8d50ed9164f", 00:08:54.839 "is_configured": true, 00:08:54.839 "data_offset": 2048, 00:08:54.839 "data_size": 63488 00:08:54.839 }, 00:08:54.839 { 00:08:54.839 "name": "BaseBdev2", 00:08:54.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.839 "is_configured": false, 00:08:54.839 
"data_offset": 0, 00:08:54.839 "data_size": 0 00:08:54.839 } 00:08:54.839 ] 00:08:54.839 }' 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.839 14:25:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.406 [2024-11-20 14:25:56.295039] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.406 [2024-11-20 14:25:56.295122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.406 [2024-11-20 14:25:56.303053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.406 [2024-11-20 14:25:56.305657] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.406 [2024-11-20 14:25:56.305723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.406 "name": "Existed_Raid", 00:08:55.406 "uuid": "977f422f-117f-409e-9417-143298d78175", 00:08:55.406 "strip_size_kb": 0, 00:08:55.406 "state": "configuring", 00:08:55.406 "raid_level": "raid1", 00:08:55.406 "superblock": true, 00:08:55.406 "num_base_bdevs": 2, 00:08:55.406 "num_base_bdevs_discovered": 1, 00:08:55.406 "num_base_bdevs_operational": 2, 00:08:55.406 "base_bdevs_list": [ 00:08:55.406 { 00:08:55.406 "name": "BaseBdev1", 00:08:55.406 "uuid": "04ff3010-69e2-4227-880b-f8d50ed9164f", 00:08:55.406 "is_configured": true, 00:08:55.406 "data_offset": 2048, 00:08:55.406 "data_size": 63488 00:08:55.406 }, 00:08:55.406 { 00:08:55.406 "name": "BaseBdev2", 00:08:55.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.406 "is_configured": false, 00:08:55.406 "data_offset": 0, 00:08:55.406 "data_size": 0 00:08:55.406 } 00:08:55.406 ] 00:08:55.406 }' 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.406 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 [2024-11-20 14:25:56.861143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.974 [2024-11-20 14:25:56.861521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.974 [2024-11-20 14:25:56.861543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.974 [2024-11-20 14:25:56.861922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:55.974 
BaseBdev2 00:08:55.974 [2024-11-20 14:25:56.862190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.974 [2024-11-20 14:25:56.862233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:55.974 [2024-11-20 14:25:56.862431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.974 [ 00:08:55.974 { 00:08:55.974 "name": "BaseBdev2", 00:08:55.974 "aliases": [ 00:08:55.974 "9c122b1a-7dd2-48f9-9b86-69d6552a0bb4" 00:08:55.974 ], 00:08:55.974 "product_name": "Malloc disk", 00:08:55.974 "block_size": 512, 00:08:55.974 "num_blocks": 65536, 00:08:55.974 "uuid": "9c122b1a-7dd2-48f9-9b86-69d6552a0bb4", 00:08:55.974 "assigned_rate_limits": { 00:08:55.974 "rw_ios_per_sec": 0, 00:08:55.974 "rw_mbytes_per_sec": 0, 00:08:55.974 "r_mbytes_per_sec": 0, 00:08:55.974 "w_mbytes_per_sec": 0 00:08:55.974 }, 00:08:55.974 "claimed": true, 00:08:55.974 "claim_type": "exclusive_write", 00:08:55.974 "zoned": false, 00:08:55.974 "supported_io_types": { 00:08:55.974 "read": true, 00:08:55.974 "write": true, 00:08:55.974 "unmap": true, 00:08:55.974 "flush": true, 00:08:55.974 "reset": true, 00:08:55.974 "nvme_admin": false, 00:08:55.974 "nvme_io": false, 00:08:55.974 "nvme_io_md": false, 00:08:55.974 "write_zeroes": true, 00:08:55.974 "zcopy": true, 00:08:55.974 "get_zone_info": false, 00:08:55.974 "zone_management": false, 00:08:55.974 "zone_append": false, 00:08:55.974 "compare": false, 00:08:55.974 "compare_and_write": false, 00:08:55.974 "abort": true, 00:08:55.974 "seek_hole": false, 00:08:55.974 "seek_data": false, 00:08:55.974 "copy": true, 00:08:55.974 "nvme_iov_md": false 00:08:55.974 }, 00:08:55.974 "memory_domains": [ 00:08:55.974 { 00:08:55.974 "dma_device_id": "system", 00:08:55.974 "dma_device_type": 1 00:08:55.974 }, 00:08:55.974 { 00:08:55.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.974 "dma_device_type": 2 00:08:55.974 } 00:08:55.974 ], 00:08:55.974 "driver_specific": {} 00:08:55.974 } 00:08:55.974 ] 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:55.974 "name": "Existed_Raid", 00:08:55.974 "uuid": "977f422f-117f-409e-9417-143298d78175", 00:08:55.974 "strip_size_kb": 0, 00:08:55.974 "state": "online", 00:08:55.974 "raid_level": "raid1", 00:08:55.974 "superblock": true, 00:08:55.974 "num_base_bdevs": 2, 00:08:55.974 "num_base_bdevs_discovered": 2, 00:08:55.974 "num_base_bdevs_operational": 2, 00:08:55.974 "base_bdevs_list": [ 00:08:55.974 { 00:08:55.974 "name": "BaseBdev1", 00:08:55.974 "uuid": "04ff3010-69e2-4227-880b-f8d50ed9164f", 00:08:55.974 "is_configured": true, 00:08:55.974 "data_offset": 2048, 00:08:55.974 "data_size": 63488 00:08:55.974 }, 00:08:55.974 { 00:08:55.974 "name": "BaseBdev2", 00:08:55.974 "uuid": "9c122b1a-7dd2-48f9-9b86-69d6552a0bb4", 00:08:55.974 "is_configured": true, 00:08:55.974 "data_offset": 2048, 00:08:55.974 "data_size": 63488 00:08:55.974 } 00:08:55.974 ] 00:08:55.974 }' 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.974 14:25:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.542 14:25:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 [2024-11-20 14:25:57.417749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.542 "name": "Existed_Raid", 00:08:56.542 "aliases": [ 00:08:56.542 "977f422f-117f-409e-9417-143298d78175" 00:08:56.542 ], 00:08:56.542 "product_name": "Raid Volume", 00:08:56.542 "block_size": 512, 00:08:56.542 "num_blocks": 63488, 00:08:56.542 "uuid": "977f422f-117f-409e-9417-143298d78175", 00:08:56.542 "assigned_rate_limits": { 00:08:56.542 "rw_ios_per_sec": 0, 00:08:56.542 "rw_mbytes_per_sec": 0, 00:08:56.542 "r_mbytes_per_sec": 0, 00:08:56.542 "w_mbytes_per_sec": 0 00:08:56.542 }, 00:08:56.542 "claimed": false, 00:08:56.542 "zoned": false, 00:08:56.542 "supported_io_types": { 00:08:56.542 "read": true, 00:08:56.542 "write": true, 00:08:56.542 "unmap": false, 00:08:56.542 "flush": false, 00:08:56.542 "reset": true, 00:08:56.542 "nvme_admin": false, 00:08:56.542 "nvme_io": false, 00:08:56.542 "nvme_io_md": false, 00:08:56.542 "write_zeroes": true, 00:08:56.542 "zcopy": false, 00:08:56.542 "get_zone_info": false, 00:08:56.542 "zone_management": false, 00:08:56.542 "zone_append": false, 00:08:56.542 "compare": false, 00:08:56.542 "compare_and_write": false, 00:08:56.542 "abort": false, 00:08:56.542 "seek_hole": false, 00:08:56.542 "seek_data": false, 00:08:56.542 "copy": false, 00:08:56.542 "nvme_iov_md": false 00:08:56.542 }, 00:08:56.542 "memory_domains": [ 00:08:56.542 { 00:08:56.542 "dma_device_id": "system", 00:08:56.542 
"dma_device_type": 1 00:08:56.542 }, 00:08:56.542 { 00:08:56.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.542 "dma_device_type": 2 00:08:56.542 }, 00:08:56.542 { 00:08:56.542 "dma_device_id": "system", 00:08:56.542 "dma_device_type": 1 00:08:56.542 }, 00:08:56.542 { 00:08:56.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.542 "dma_device_type": 2 00:08:56.542 } 00:08:56.542 ], 00:08:56.542 "driver_specific": { 00:08:56.542 "raid": { 00:08:56.542 "uuid": "977f422f-117f-409e-9417-143298d78175", 00:08:56.542 "strip_size_kb": 0, 00:08:56.542 "state": "online", 00:08:56.542 "raid_level": "raid1", 00:08:56.542 "superblock": true, 00:08:56.542 "num_base_bdevs": 2, 00:08:56.542 "num_base_bdevs_discovered": 2, 00:08:56.542 "num_base_bdevs_operational": 2, 00:08:56.542 "base_bdevs_list": [ 00:08:56.542 { 00:08:56.542 "name": "BaseBdev1", 00:08:56.542 "uuid": "04ff3010-69e2-4227-880b-f8d50ed9164f", 00:08:56.542 "is_configured": true, 00:08:56.542 "data_offset": 2048, 00:08:56.542 "data_size": 63488 00:08:56.542 }, 00:08:56.542 { 00:08:56.542 "name": "BaseBdev2", 00:08:56.542 "uuid": "9c122b1a-7dd2-48f9-9b86-69d6552a0bb4", 00:08:56.542 "is_configured": true, 00:08:56.542 "data_offset": 2048, 00:08:56.542 "data_size": 63488 00:08:56.542 } 00:08:56.542 ] 00:08:56.542 } 00:08:56.542 } 00:08:56.542 }' 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:56.542 BaseBdev2' 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.542 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.801 14:25:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.801 [2024-11-20 14:25:57.677469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.801 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.802 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.802 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.802 "name": "Existed_Raid", 00:08:56.802 "uuid": "977f422f-117f-409e-9417-143298d78175", 00:08:56.802 "strip_size_kb": 0, 00:08:56.802 "state": "online", 00:08:56.802 "raid_level": "raid1", 00:08:56.802 "superblock": true, 00:08:56.802 "num_base_bdevs": 2, 00:08:56.802 "num_base_bdevs_discovered": 1, 00:08:56.802 "num_base_bdevs_operational": 1, 00:08:56.802 "base_bdevs_list": [ 00:08:56.802 { 00:08:56.802 "name": null, 00:08:56.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.802 "is_configured": false, 00:08:56.802 "data_offset": 0, 00:08:56.802 "data_size": 63488 00:08:56.802 }, 00:08:56.802 { 00:08:56.802 "name": "BaseBdev2", 00:08:56.802 "uuid": "9c122b1a-7dd2-48f9-9b86-69d6552a0bb4", 00:08:56.802 "is_configured": true, 00:08:56.802 "data_offset": 2048, 00:08:56.802 "data_size": 63488 00:08:56.802 } 00:08:56.802 ] 00:08:56.802 }' 00:08:56.802 14:25:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.802 14:25:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.369 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.369 [2024-11-20 14:25:58.354502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.369 [2024-11-20 14:25:58.354693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.628 [2024-11-20 14:25:58.447526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.628 [2024-11-20 14:25:58.447615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.628 [2024-11-20 14:25:58.447664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63004 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63004 ']' 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63004 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63004 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.628 14:25:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.628 killing process with pid 63004 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63004' 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63004 00:08:57.628 [2024-11-20 14:25:58.552613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.628 14:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63004 00:08:57.628 [2024-11-20 14:25:58.568249] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.003 14:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:59.003 00:08:59.003 real 0m5.666s 00:08:59.003 user 0m8.426s 00:08:59.003 sys 0m0.842s 00:08:59.003 14:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.003 ************************************ 00:08:59.003 END TEST raid_state_function_test_sb 00:08:59.003 14:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.003 ************************************ 00:08:59.003 14:25:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:59.003 14:25:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:59.003 14:25:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.003 14:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.003 ************************************ 00:08:59.003 START TEST raid_superblock_test 00:08:59.003 ************************************ 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63262 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63262 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63262 ']' 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.003 14:25:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.003 [2024-11-20 14:25:59.925374] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:08:59.004 [2024-11-20 14:25:59.925568] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63262 ] 00:08:59.262 [2024-11-20 14:26:00.116995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.262 [2024-11-20 14:26:00.273337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.522 [2024-11-20 14:26:00.494138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.522 [2024-11-20 14:26:00.494195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.090 14:26:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.090 malloc1 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.090 [2024-11-20 14:26:00.980028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.090 [2024-11-20 14:26:00.980102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.090 [2024-11-20 14:26:00.980135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:00.090 [2024-11-20 14:26:00.980153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.090 
[2024-11-20 14:26:00.982996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.090 [2024-11-20 14:26:00.983056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.090 pt1 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.090 14:26:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.090 malloc2 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.090 14:26:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.090 [2024-11-20 14:26:01.036412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.090 [2024-11-20 14:26:01.036481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.090 [2024-11-20 14:26:01.036520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:00.090 [2024-11-20 14:26:01.036535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.090 [2024-11-20 14:26:01.039444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.090 [2024-11-20 14:26:01.039491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.090 pt2 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.090 [2024-11-20 14:26:01.048479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:00.090 [2024-11-20 14:26:01.050951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.090 [2024-11-20 14:26:01.051185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:00.090 [2024-11-20 14:26:01.051210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:00.090 [2024-11-20 
14:26:01.051512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:00.090 [2024-11-20 14:26:01.051752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:00.090 [2024-11-20 14:26:01.051787] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:00.090 [2024-11-20 14:26:01.051960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.090 14:26:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.090 "name": "raid_bdev1", 00:09:00.090 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:00.090 "strip_size_kb": 0, 00:09:00.090 "state": "online", 00:09:00.090 "raid_level": "raid1", 00:09:00.090 "superblock": true, 00:09:00.090 "num_base_bdevs": 2, 00:09:00.090 "num_base_bdevs_discovered": 2, 00:09:00.090 "num_base_bdevs_operational": 2, 00:09:00.090 "base_bdevs_list": [ 00:09:00.090 { 00:09:00.090 "name": "pt1", 00:09:00.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.090 "is_configured": true, 00:09:00.090 "data_offset": 2048, 00:09:00.090 "data_size": 63488 00:09:00.090 }, 00:09:00.090 { 00:09:00.090 "name": "pt2", 00:09:00.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.090 "is_configured": true, 00:09:00.090 "data_offset": 2048, 00:09:00.090 "data_size": 63488 00:09:00.090 } 00:09:00.090 ] 00:09:00.090 }' 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.090 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.667 
14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.667 [2024-11-20 14:26:01.572975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.667 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.667 "name": "raid_bdev1", 00:09:00.667 "aliases": [ 00:09:00.667 "a0beee26-20f0-43de-a2ae-4b490c54ecfc" 00:09:00.667 ], 00:09:00.668 "product_name": "Raid Volume", 00:09:00.668 "block_size": 512, 00:09:00.668 "num_blocks": 63488, 00:09:00.668 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:00.668 "assigned_rate_limits": { 00:09:00.668 "rw_ios_per_sec": 0, 00:09:00.668 "rw_mbytes_per_sec": 0, 00:09:00.668 "r_mbytes_per_sec": 0, 00:09:00.668 "w_mbytes_per_sec": 0 00:09:00.668 }, 00:09:00.668 "claimed": false, 00:09:00.668 "zoned": false, 00:09:00.668 "supported_io_types": { 00:09:00.668 "read": true, 00:09:00.668 "write": true, 00:09:00.668 "unmap": false, 00:09:00.668 "flush": false, 00:09:00.668 "reset": true, 00:09:00.668 "nvme_admin": false, 00:09:00.668 "nvme_io": false, 00:09:00.668 "nvme_io_md": false, 00:09:00.668 "write_zeroes": true, 00:09:00.668 "zcopy": false, 00:09:00.668 "get_zone_info": false, 00:09:00.668 "zone_management": false, 00:09:00.668 "zone_append": false, 00:09:00.668 "compare": false, 00:09:00.668 "compare_and_write": false, 00:09:00.668 "abort": false, 00:09:00.668 "seek_hole": false, 
00:09:00.668 "seek_data": false, 00:09:00.668 "copy": false, 00:09:00.668 "nvme_iov_md": false 00:09:00.668 }, 00:09:00.668 "memory_domains": [ 00:09:00.668 { 00:09:00.668 "dma_device_id": "system", 00:09:00.668 "dma_device_type": 1 00:09:00.668 }, 00:09:00.668 { 00:09:00.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.668 "dma_device_type": 2 00:09:00.668 }, 00:09:00.668 { 00:09:00.668 "dma_device_id": "system", 00:09:00.668 "dma_device_type": 1 00:09:00.668 }, 00:09:00.668 { 00:09:00.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.668 "dma_device_type": 2 00:09:00.668 } 00:09:00.668 ], 00:09:00.668 "driver_specific": { 00:09:00.668 "raid": { 00:09:00.668 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:00.668 "strip_size_kb": 0, 00:09:00.668 "state": "online", 00:09:00.668 "raid_level": "raid1", 00:09:00.668 "superblock": true, 00:09:00.668 "num_base_bdevs": 2, 00:09:00.668 "num_base_bdevs_discovered": 2, 00:09:00.668 "num_base_bdevs_operational": 2, 00:09:00.668 "base_bdevs_list": [ 00:09:00.668 { 00:09:00.668 "name": "pt1", 00:09:00.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.668 "is_configured": true, 00:09:00.668 "data_offset": 2048, 00:09:00.668 "data_size": 63488 00:09:00.668 }, 00:09:00.668 { 00:09:00.668 "name": "pt2", 00:09:00.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.668 "is_configured": true, 00:09:00.668 "data_offset": 2048, 00:09:00.668 "data_size": 63488 00:09:00.668 } 00:09:00.668 ] 00:09:00.668 } 00:09:00.668 } 00:09:00.668 }' 00:09:00.668 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.668 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:00.668 pt2' 00:09:00.668 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.941 14:26:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.941 [2024-11-20 14:26:01.832943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a0beee26-20f0-43de-a2ae-4b490c54ecfc 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a0beee26-20f0-43de-a2ae-4b490c54ecfc ']' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.941 [2024-11-20 14:26:01.876606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.941 [2024-11-20 14:26:01.876650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.941 [2024-11-20 14:26:01.876757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.941 [2024-11-20 14:26:01.876845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.941 [2024-11-20 14:26:01.876866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:00.941 14:26:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.200 [2024-11-20 14:26:02.012719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:01.200 [2024-11-20 14:26:02.015278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:01.200 [2024-11-20 14:26:02.015372] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:09:01.200 [2024-11-20 14:26:02.015446] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:01.200 [2024-11-20 14:26:02.015472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.200 [2024-11-20 14:26:02.015487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:01.200 request: 00:09:01.200 { 00:09:01.200 "name": "raid_bdev1", 00:09:01.200 "raid_level": "raid1", 00:09:01.200 "base_bdevs": [ 00:09:01.200 "malloc1", 00:09:01.200 "malloc2" 00:09:01.200 ], 00:09:01.200 "superblock": false, 00:09:01.200 "method": "bdev_raid_create", 00:09:01.200 "req_id": 1 00:09:01.200 } 00:09:01.200 Got JSON-RPC error response 00:09:01.200 response: 00:09:01.200 { 00:09:01.200 "code": -17, 00:09:01.200 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:01.200 } 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.200 [2024-11-20 14:26:02.076689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:01.200 [2024-11-20 14:26:02.076748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.200 [2024-11-20 14:26:02.076787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:01.200 [2024-11-20 14:26:02.076804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.200 [2024-11-20 14:26:02.079670] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.200 [2024-11-20 14:26:02.079727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:01.200 [2024-11-20 14:26:02.079815] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:01.200 [2024-11-20 14:26:02.079884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:01.200 pt1 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.200 14:26:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.200 "name": "raid_bdev1", 00:09:01.200 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:01.200 "strip_size_kb": 0, 00:09:01.200 "state": "configuring", 00:09:01.200 "raid_level": "raid1", 00:09:01.200 "superblock": true, 00:09:01.200 "num_base_bdevs": 2, 00:09:01.200 "num_base_bdevs_discovered": 1, 00:09:01.200 "num_base_bdevs_operational": 2, 00:09:01.200 "base_bdevs_list": [ 00:09:01.200 { 00:09:01.200 "name": "pt1", 00:09:01.200 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.200 
"is_configured": true, 00:09:01.200 "data_offset": 2048, 00:09:01.200 "data_size": 63488 00:09:01.200 }, 00:09:01.200 { 00:09:01.200 "name": null, 00:09:01.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.200 "is_configured": false, 00:09:01.200 "data_offset": 2048, 00:09:01.200 "data_size": 63488 00:09:01.200 } 00:09:01.200 ] 00:09:01.200 }' 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.200 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.767 [2024-11-20 14:26:02.620956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:01.767 [2024-11-20 14:26:02.621048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.767 [2024-11-20 14:26:02.621082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:01.767 [2024-11-20 14:26:02.621100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.767 [2024-11-20 14:26:02.621736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.767 [2024-11-20 14:26:02.621781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:01.767 [2024-11-20 14:26:02.621890] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:01.767 [2024-11-20 14:26:02.621941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:01.767 [2024-11-20 14:26:02.622093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:01.767 [2024-11-20 14:26:02.622123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:01.767 [2024-11-20 14:26:02.622434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:01.767 [2024-11-20 14:26:02.622643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:01.767 [2024-11-20 14:26:02.622667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:01.767 [2024-11-20 14:26:02.622839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.767 pt2 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:01.767 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.768 
14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.768 "name": "raid_bdev1", 00:09:01.768 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:01.768 "strip_size_kb": 0, 00:09:01.768 "state": "online", 00:09:01.768 "raid_level": "raid1", 00:09:01.768 "superblock": true, 00:09:01.768 "num_base_bdevs": 2, 00:09:01.768 "num_base_bdevs_discovered": 2, 00:09:01.768 "num_base_bdevs_operational": 2, 00:09:01.768 "base_bdevs_list": [ 00:09:01.768 { 00:09:01.768 "name": "pt1", 00:09:01.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.768 "is_configured": true, 00:09:01.768 "data_offset": 2048, 00:09:01.768 "data_size": 63488 00:09:01.768 }, 00:09:01.768 { 00:09:01.768 "name": "pt2", 00:09:01.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.768 "is_configured": true, 00:09:01.768 "data_offset": 2048, 00:09:01.768 "data_size": 63488 00:09:01.768 } 00:09:01.768 ] 00:09:01.768 }' 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:01.768 14:26:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.334 [2024-11-20 14:26:03.165383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.334 "name": "raid_bdev1", 00:09:02.334 "aliases": [ 00:09:02.334 "a0beee26-20f0-43de-a2ae-4b490c54ecfc" 00:09:02.334 ], 00:09:02.334 "product_name": "Raid Volume", 00:09:02.334 "block_size": 512, 00:09:02.334 "num_blocks": 63488, 00:09:02.334 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:02.334 "assigned_rate_limits": { 00:09:02.334 "rw_ios_per_sec": 0, 00:09:02.334 "rw_mbytes_per_sec": 0, 00:09:02.334 "r_mbytes_per_sec": 0, 00:09:02.334 "w_mbytes_per_sec": 0 
00:09:02.334 }, 00:09:02.334 "claimed": false, 00:09:02.334 "zoned": false, 00:09:02.334 "supported_io_types": { 00:09:02.334 "read": true, 00:09:02.334 "write": true, 00:09:02.334 "unmap": false, 00:09:02.334 "flush": false, 00:09:02.334 "reset": true, 00:09:02.334 "nvme_admin": false, 00:09:02.334 "nvme_io": false, 00:09:02.334 "nvme_io_md": false, 00:09:02.334 "write_zeroes": true, 00:09:02.334 "zcopy": false, 00:09:02.334 "get_zone_info": false, 00:09:02.334 "zone_management": false, 00:09:02.334 "zone_append": false, 00:09:02.334 "compare": false, 00:09:02.334 "compare_and_write": false, 00:09:02.334 "abort": false, 00:09:02.334 "seek_hole": false, 00:09:02.334 "seek_data": false, 00:09:02.334 "copy": false, 00:09:02.334 "nvme_iov_md": false 00:09:02.334 }, 00:09:02.334 "memory_domains": [ 00:09:02.334 { 00:09:02.334 "dma_device_id": "system", 00:09:02.334 "dma_device_type": 1 00:09:02.334 }, 00:09:02.334 { 00:09:02.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.334 "dma_device_type": 2 00:09:02.334 }, 00:09:02.334 { 00:09:02.334 "dma_device_id": "system", 00:09:02.334 "dma_device_type": 1 00:09:02.334 }, 00:09:02.334 { 00:09:02.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.334 "dma_device_type": 2 00:09:02.334 } 00:09:02.334 ], 00:09:02.334 "driver_specific": { 00:09:02.334 "raid": { 00:09:02.334 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:02.334 "strip_size_kb": 0, 00:09:02.334 "state": "online", 00:09:02.334 "raid_level": "raid1", 00:09:02.334 "superblock": true, 00:09:02.334 "num_base_bdevs": 2, 00:09:02.334 "num_base_bdevs_discovered": 2, 00:09:02.334 "num_base_bdevs_operational": 2, 00:09:02.334 "base_bdevs_list": [ 00:09:02.334 { 00:09:02.334 "name": "pt1", 00:09:02.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.334 "is_configured": true, 00:09:02.334 "data_offset": 2048, 00:09:02.334 "data_size": 63488 00:09:02.334 }, 00:09:02.334 { 00:09:02.334 "name": "pt2", 00:09:02.334 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:02.334 "is_configured": true, 00:09:02.334 "data_offset": 2048, 00:09:02.334 "data_size": 63488 00:09:02.334 } 00:09:02.334 ] 00:09:02.334 } 00:09:02.334 } 00:09:02.334 }' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:02.334 pt2' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.334 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:02.591 [2024-11-20 14:26:03.433440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.591 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a0beee26-20f0-43de-a2ae-4b490c54ecfc '!=' a0beee26-20f0-43de-a2ae-4b490c54ecfc ']' 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.592 [2024-11-20 14:26:03.485213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:02.592 "name": "raid_bdev1", 00:09:02.592 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:02.592 "strip_size_kb": 0, 00:09:02.592 "state": "online", 00:09:02.592 "raid_level": "raid1", 00:09:02.592 "superblock": true, 00:09:02.592 "num_base_bdevs": 2, 00:09:02.592 "num_base_bdevs_discovered": 1, 00:09:02.592 "num_base_bdevs_operational": 1, 00:09:02.592 "base_bdevs_list": [ 00:09:02.592 { 00:09:02.592 "name": null, 00:09:02.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.592 "is_configured": false, 00:09:02.592 "data_offset": 0, 00:09:02.592 "data_size": 63488 00:09:02.592 }, 00:09:02.592 { 00:09:02.592 "name": "pt2", 00:09:02.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.592 "is_configured": true, 00:09:02.592 "data_offset": 2048, 00:09:02.592 "data_size": 63488 00:09:02.592 } 00:09:02.592 ] 00:09:02.592 }' 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.592 14:26:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 [2024-11-20 14:26:04.033314] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.157 [2024-11-20 14:26:04.033364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.157 [2024-11-20 14:26:04.033465] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.157 [2024-11-20 14:26:04.033534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.157 [2024-11-20 14:26:04.033553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 [2024-11-20 14:26:04.109264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.157 [2024-11-20 14:26:04.109332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.157 [2024-11-20 14:26:04.109357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:03.157 [2024-11-20 14:26:04.109374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.157 [2024-11-20 14:26:04.112364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.157 [2024-11-20 14:26:04.112415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.157 [2024-11-20 14:26:04.112525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:03.157 [2024-11-20 14:26:04.112587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.157 [2024-11-20 14:26:04.112734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:03.157 [2024-11-20 14:26:04.112758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:03.157 [2024-11-20 14:26:04.113049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:03.157 [2024-11-20 14:26:04.113249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:03.157 [2024-11-20 14:26:04.113276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:09:03.157 [2024-11-20 14:26:04.113495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.157 pt2 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.157 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:03.157 "name": "raid_bdev1", 00:09:03.157 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:03.157 "strip_size_kb": 0, 00:09:03.157 "state": "online", 00:09:03.157 "raid_level": "raid1", 00:09:03.157 "superblock": true, 00:09:03.157 "num_base_bdevs": 2, 00:09:03.157 "num_base_bdevs_discovered": 1, 00:09:03.157 "num_base_bdevs_operational": 1, 00:09:03.157 "base_bdevs_list": [ 00:09:03.157 { 00:09:03.157 "name": null, 00:09:03.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.157 "is_configured": false, 00:09:03.157 "data_offset": 2048, 00:09:03.157 "data_size": 63488 00:09:03.157 }, 00:09:03.157 { 00:09:03.157 "name": "pt2", 00:09:03.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.157 "is_configured": true, 00:09:03.157 "data_offset": 2048, 00:09:03.157 "data_size": 63488 00:09:03.158 } 00:09:03.158 ] 00:09:03.158 }' 00:09:03.158 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.158 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 [2024-11-20 14:26:04.661584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.725 [2024-11-20 14:26:04.661649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:03.725 [2024-11-20 14:26:04.661751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.725 [2024-11-20 14:26:04.661838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.725 [2024-11-20 14:26:04.661853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 [2024-11-20 14:26:04.733614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.725 [2024-11-20 14:26:04.733726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.725 [2024-11-20 14:26:04.733759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:03.725 [2024-11-20 14:26:04.733774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.725 [2024-11-20 14:26:04.736782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.725 [2024-11-20 14:26:04.736826] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.725 [2024-11-20 14:26:04.736946] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:03.725 [2024-11-20 14:26:04.737004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:03.725 [2024-11-20 14:26:04.737180] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:03.725 [2024-11-20 14:26:04.737209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:03.725 [2024-11-20 14:26:04.737234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:03.725 [2024-11-20 14:26:04.737305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.725 [2024-11-20 14:26:04.737407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:03.725 [2024-11-20 14:26:04.737422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:03.725 [2024-11-20 14:26:04.737765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:03.725 [2024-11-20 14:26:04.737967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:03.725 [2024-11-20 14:26:04.737988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:03.725 [2024-11-20 14:26:04.738222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.725 pt1 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.725 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.984 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.984 "name": "raid_bdev1", 00:09:03.984 "uuid": "a0beee26-20f0-43de-a2ae-4b490c54ecfc", 00:09:03.984 "strip_size_kb": 0, 00:09:03.984 "state": "online", 00:09:03.984 "raid_level": "raid1", 00:09:03.984 "superblock": true, 00:09:03.984 "num_base_bdevs": 2, 00:09:03.984 "num_base_bdevs_discovered": 1, 00:09:03.984 "num_base_bdevs_operational": 
1, 00:09:03.984 "base_bdevs_list": [ 00:09:03.984 { 00:09:03.984 "name": null, 00:09:03.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.984 "is_configured": false, 00:09:03.984 "data_offset": 2048, 00:09:03.984 "data_size": 63488 00:09:03.984 }, 00:09:03.984 { 00:09:03.984 "name": "pt2", 00:09:03.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.984 "is_configured": true, 00:09:03.984 "data_offset": 2048, 00:09:03.984 "data_size": 63488 00:09:03.984 } 00:09:03.984 ] 00:09:03.984 }' 00:09:03.984 14:26:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.984 14:26:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.242 14:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:04.242 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.242 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.242 14:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:04.242 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.500 14:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:04.500 14:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.501 [2024-11-20 14:26:05.334655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a0beee26-20f0-43de-a2ae-4b490c54ecfc '!=' a0beee26-20f0-43de-a2ae-4b490c54ecfc ']' 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63262 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63262 ']' 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63262 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63262 00:09:04.501 killing process with pid 63262 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63262' 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63262 00:09:04.501 [2024-11-20 14:26:05.408305] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:04.501 14:26:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63262 00:09:04.501 [2024-11-20 14:26:05.408452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.501 [2024-11-20 14:26:05.408526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.501 [2024-11-20 14:26:05.408554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:09:04.760 [2024-11-20 14:26:05.598465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.694 ************************************ 00:09:05.694 END TEST raid_superblock_test 00:09:05.694 ************************************ 00:09:05.694 14:26:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:05.694 00:09:05.694 real 0m6.853s 00:09:05.694 user 0m10.873s 00:09:05.694 sys 0m0.981s 00:09:05.694 14:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.694 14:26:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.694 14:26:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:05.694 14:26:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:05.694 14:26:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.694 14:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:05.694 ************************************ 00:09:05.694 START TEST raid_read_error_test 00:09:05.694 ************************************ 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GMSNg7GpYe 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63598 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63598 00:09:05.695 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63598 ']' 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.695 14:26:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.953 [2024-11-20 14:26:06.838087] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:09:05.954 [2024-11-20 14:26:06.838254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63598 ] 00:09:06.212 [2024-11-20 14:26:07.016098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.212 [2024-11-20 14:26:07.148237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.470 [2024-11-20 14:26:07.357191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.470 [2024-11-20 14:26:07.357270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.037 BaseBdev1_malloc 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.037 true 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.037 [2024-11-20 14:26:07.844496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:07.037 [2024-11-20 14:26:07.844570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.037 [2024-11-20 14:26:07.844601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:07.037 [2024-11-20 14:26:07.844620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.037 [2024-11-20 14:26:07.847502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.037 [2024-11-20 14:26:07.847553] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:09:07.037 BaseBdev1 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.037 BaseBdev2_malloc 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.037 true 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.037 [2024-11-20 14:26:07.901085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:07.037 [2024-11-20 14:26:07.901170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.037 [2024-11-20 14:26:07.901197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:07.037 [2024-11-20 14:26:07.901216] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.037 [2024-11-20 14:26:07.904060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.037 [2024-11-20 14:26:07.904110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:07.037 BaseBdev2 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.037 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.037 [2024-11-20 14:26:07.909161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.037 [2024-11-20 14:26:07.911730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.037 [2024-11-20 14:26:07.912010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:07.037 [2024-11-20 14:26:07.912034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.037 [2024-11-20 14:26:07.912344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:07.037 [2024-11-20 14:26:07.912573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:07.037 [2024-11-20 14:26:07.912590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:07.037 [2024-11-20 14:26:07.912799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.038 "name": "raid_bdev1", 00:09:07.038 "uuid": "8d405d06-9ae3-4197-9973-ae2c1d39efc1", 00:09:07.038 "strip_size_kb": 0, 00:09:07.038 "state": "online", 00:09:07.038 "raid_level": "raid1", 00:09:07.038 "superblock": true, 00:09:07.038 "num_base_bdevs": 2, 00:09:07.038 
"num_base_bdevs_discovered": 2, 00:09:07.038 "num_base_bdevs_operational": 2, 00:09:07.038 "base_bdevs_list": [ 00:09:07.038 { 00:09:07.038 "name": "BaseBdev1", 00:09:07.038 "uuid": "3105b2e4-1de4-5794-9f2b-e013be2c82e8", 00:09:07.038 "is_configured": true, 00:09:07.038 "data_offset": 2048, 00:09:07.038 "data_size": 63488 00:09:07.038 }, 00:09:07.038 { 00:09:07.038 "name": "BaseBdev2", 00:09:07.038 "uuid": "c5e09860-66be-5f76-a9f8-0db324e1d7da", 00:09:07.038 "is_configured": true, 00:09:07.038 "data_offset": 2048, 00:09:07.038 "data_size": 63488 00:09:07.038 } 00:09:07.038 ] 00:09:07.038 }' 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.038 14:26:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.604 14:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:07.604 14:26:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:07.604 [2024-11-20 14:26:08.554767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:08.540 14:26:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.540 "name": "raid_bdev1", 00:09:08.540 "uuid": "8d405d06-9ae3-4197-9973-ae2c1d39efc1", 00:09:08.540 "strip_size_kb": 0, 00:09:08.540 "state": "online", 
00:09:08.540 "raid_level": "raid1", 00:09:08.540 "superblock": true, 00:09:08.540 "num_base_bdevs": 2, 00:09:08.540 "num_base_bdevs_discovered": 2, 00:09:08.540 "num_base_bdevs_operational": 2, 00:09:08.540 "base_bdevs_list": [ 00:09:08.540 { 00:09:08.540 "name": "BaseBdev1", 00:09:08.540 "uuid": "3105b2e4-1de4-5794-9f2b-e013be2c82e8", 00:09:08.540 "is_configured": true, 00:09:08.540 "data_offset": 2048, 00:09:08.540 "data_size": 63488 00:09:08.540 }, 00:09:08.540 { 00:09:08.540 "name": "BaseBdev2", 00:09:08.540 "uuid": "c5e09860-66be-5f76-a9f8-0db324e1d7da", 00:09:08.540 "is_configured": true, 00:09:08.540 "data_offset": 2048, 00:09:08.540 "data_size": 63488 00:09:08.540 } 00:09:08.540 ] 00:09:08.540 }' 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.540 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.107 [2024-11-20 14:26:09.952446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.107 [2024-11-20 14:26:09.952492] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.107 { 00:09:09.107 "results": [ 00:09:09.107 { 00:09:09.107 "job": "raid_bdev1", 00:09:09.107 "core_mask": "0x1", 00:09:09.107 "workload": "randrw", 00:09:09.107 "percentage": 50, 00:09:09.107 "status": "finished", 00:09:09.107 "queue_depth": 1, 00:09:09.107 "io_size": 131072, 00:09:09.107 "runtime": 1.395065, 00:09:09.107 "iops": 12827.359298670672, 00:09:09.107 "mibps": 1603.419912333834, 00:09:09.107 "io_failed": 0, 00:09:09.107 "io_timeout": 0, 00:09:09.107 "avg_latency_us": 73.69146201325917, 
00:09:09.107 "min_latency_us": 42.589090909090906, 00:09:09.107 "max_latency_us": 1869.2654545454545 00:09:09.107 } 00:09:09.107 ], 00:09:09.107 "core_count": 1 00:09:09.107 } 00:09:09.107 [2024-11-20 14:26:09.955818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.107 [2024-11-20 14:26:09.955882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.107 [2024-11-20 14:26:09.955992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.107 [2024-11-20 14:26:09.956013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63598 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63598 ']' 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63598 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63598 00:09:09.107 killing process with pid 63598 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63598' 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63598 00:09:09.107 [2024-11-20 
14:26:09.996185] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.107 14:26:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63598 00:09:09.107 [2024-11-20 14:26:10.121744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GMSNg7GpYe 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:10.482 ************************************ 00:09:10.482 END TEST raid_read_error_test 00:09:10.482 ************************************ 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:10.482 00:09:10.482 real 0m4.562s 00:09:10.482 user 0m5.670s 00:09:10.482 sys 0m0.578s 00:09:10.482 14:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.483 14:26:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.483 14:26:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:10.483 14:26:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:10.483 14:26:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.483 14:26:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.483 ************************************ 00:09:10.483 START TEST 
raid_write_error_test 00:09:10.483 ************************************ 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:10.483 14:26:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VvoQP3fWOz 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63743 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63743 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63743 ']' 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.483 14:26:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.483 [2024-11-20 14:26:11.450529] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:09:10.483 [2024-11-20 14:26:11.450987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63743 ] 00:09:10.742 [2024-11-20 14:26:11.632444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.742 [2024-11-20 14:26:11.765919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.001 [2024-11-20 14:26:11.975679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.001 [2024-11-20 14:26:11.975757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.567 BaseBdev1_malloc 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.567 true 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.567 [2024-11-20 14:26:12.465628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:11.567 [2024-11-20 14:26:12.465736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.567 [2024-11-20 14:26:12.465767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:11.567 [2024-11-20 14:26:12.465784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.567 [2024-11-20 14:26:12.469008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.567 [2024-11-20 14:26:12.469175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:11.567 BaseBdev1 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.567 BaseBdev2_malloc 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:11.567 14:26:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.567 true 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.567 [2024-11-20 14:26:12.531513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:11.567 [2024-11-20 14:26:12.531581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.567 [2024-11-20 14:26:12.531608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:11.567 [2024-11-20 14:26:12.531645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.567 [2024-11-20 14:26:12.534609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.567 [2024-11-20 14:26:12.534837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:11.567 BaseBdev2 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.567 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.567 [2024-11-20 14:26:12.539672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:11.567 [2024-11-20 14:26:12.542430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.567 [2024-11-20 14:26:12.542856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.567 [2024-11-20 14:26:12.542984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:11.568 [2024-11-20 14:26:12.543405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:11.568 [2024-11-20 14:26:12.543786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.568 [2024-11-20 14:26:12.543911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:11.568 [2024-11-20 14:26:12.544329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.568 "name": "raid_bdev1", 00:09:11.568 "uuid": "3fe62b02-6348-470e-bc1d-251a35d82e33", 00:09:11.568 "strip_size_kb": 0, 00:09:11.568 "state": "online", 00:09:11.568 "raid_level": "raid1", 00:09:11.568 "superblock": true, 00:09:11.568 "num_base_bdevs": 2, 00:09:11.568 "num_base_bdevs_discovered": 2, 00:09:11.568 "num_base_bdevs_operational": 2, 00:09:11.568 "base_bdevs_list": [ 00:09:11.568 { 00:09:11.568 "name": "BaseBdev1", 00:09:11.568 "uuid": "569c2b59-5e52-5dc1-bff1-cbbd9bdff19e", 00:09:11.568 "is_configured": true, 00:09:11.568 "data_offset": 2048, 00:09:11.568 "data_size": 63488 00:09:11.568 }, 00:09:11.568 { 00:09:11.568 "name": "BaseBdev2", 00:09:11.568 "uuid": "96a196d1-3f24-58f2-8048-450afe8ebb53", 00:09:11.568 "is_configured": true, 00:09:11.568 "data_offset": 2048, 00:09:11.568 "data_size": 63488 00:09:11.568 } 00:09:11.568 ] 00:09:11.568 }' 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.568 14:26:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.138 14:26:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:12.138 14:26:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:12.418 [2024-11-20 14:26:13.201909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.352 [2024-11-20 14:26:14.074410] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:13.352 [2024-11-20 14:26:14.074621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.352 [2024-11-20 14:26:14.074887] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.352 "name": "raid_bdev1", 00:09:13.352 "uuid": "3fe62b02-6348-470e-bc1d-251a35d82e33", 00:09:13.352 "strip_size_kb": 0, 00:09:13.352 "state": "online", 00:09:13.352 "raid_level": "raid1", 00:09:13.352 "superblock": true, 00:09:13.352 "num_base_bdevs": 2, 00:09:13.352 "num_base_bdevs_discovered": 1, 00:09:13.352 "num_base_bdevs_operational": 1, 00:09:13.352 "base_bdevs_list": [ 00:09:13.352 { 00:09:13.352 "name": null, 00:09:13.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.352 "is_configured": false, 00:09:13.352 "data_offset": 0, 00:09:13.352 "data_size": 63488 00:09:13.352 }, 00:09:13.352 { 00:09:13.352 "name": 
"BaseBdev2", 00:09:13.352 "uuid": "96a196d1-3f24-58f2-8048-450afe8ebb53", 00:09:13.352 "is_configured": true, 00:09:13.352 "data_offset": 2048, 00:09:13.352 "data_size": 63488 00:09:13.352 } 00:09:13.352 ] 00:09:13.352 }' 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.352 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.612 [2024-11-20 14:26:14.605526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.612 [2024-11-20 14:26:14.605729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.612 [2024-11-20 14:26:14.609118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.612 [2024-11-20 14:26:14.609292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.612 [2024-11-20 14:26:14.609392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.612 [2024-11-20 14:26:14.609412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:13.612 { 00:09:13.612 "results": [ 00:09:13.612 { 00:09:13.612 "job": "raid_bdev1", 00:09:13.612 "core_mask": "0x1", 00:09:13.612 "workload": "randrw", 00:09:13.612 "percentage": 50, 00:09:13.612 "status": "finished", 00:09:13.612 "queue_depth": 1, 00:09:13.612 "io_size": 131072, 00:09:13.612 "runtime": 1.401226, 00:09:13.612 "iops": 15357.26570874363, 00:09:13.612 "mibps": 1919.6582135929536, 00:09:13.612 "io_failed": 0, 00:09:13.612 "io_timeout": 0, 
00:09:13.612 "avg_latency_us": 60.98583627998936, 00:09:13.612 "min_latency_us": 40.261818181818185, 00:09:13.612 "max_latency_us": 1794.7927272727272 00:09:13.612 } 00:09:13.612 ], 00:09:13.612 "core_count": 1 00:09:13.612 } 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63743 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63743 ']' 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63743 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63743 00:09:13.612 killing process with pid 63743 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63743' 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63743 00:09:13.612 [2024-11-20 14:26:14.647120] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.612 14:26:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63743 00:09:13.871 [2024-11-20 14:26:14.767984] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VvoQP3fWOz 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:15.245 00:09:15.245 real 0m4.567s 00:09:15.245 user 0m5.716s 00:09:15.245 sys 0m0.558s 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.245 ************************************ 00:09:15.245 END TEST raid_write_error_test 00:09:15.245 ************************************ 00:09:15.245 14:26:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.245 14:26:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:15.245 14:26:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:15.245 14:26:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:15.245 14:26:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:15.245 14:26:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.245 14:26:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.245 ************************************ 00:09:15.245 START TEST raid_state_function_test 00:09:15.245 ************************************ 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:15.245 
14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:15.245 Process raid pid: 63888 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63888 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63888' 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63888 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63888 ']' 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.245 14:26:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.245 [2024-11-20 14:26:16.068687] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:09:15.245 [2024-11-20 14:26:16.069112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.245 [2024-11-20 14:26:16.254677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.503 [2024-11-20 14:26:16.390056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.761 [2024-11-20 14:26:16.603646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.761 [2024-11-20 14:26:16.603713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.019 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.019 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:16.019 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.019 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.019 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.277 [2024-11-20 14:26:17.074130] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.277 [2024-11-20 14:26:17.074208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.277 [2024-11-20 14:26:17.074226] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.277 [2024-11-20 14:26:17.074243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.277 [2024-11-20 14:26:17.074253] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.277 [2024-11-20 14:26:17.074268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.277 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.277 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.278 14:26:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.278 "name": "Existed_Raid", 00:09:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.278 "strip_size_kb": 64, 00:09:16.278 "state": "configuring", 00:09:16.278 "raid_level": "raid0", 00:09:16.278 "superblock": false, 00:09:16.278 "num_base_bdevs": 3, 00:09:16.278 "num_base_bdevs_discovered": 0, 00:09:16.278 "num_base_bdevs_operational": 3, 00:09:16.278 "base_bdevs_list": [ 00:09:16.278 { 00:09:16.278 "name": "BaseBdev1", 00:09:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.278 "is_configured": false, 00:09:16.278 "data_offset": 0, 00:09:16.278 "data_size": 0 00:09:16.278 }, 00:09:16.278 { 00:09:16.278 "name": "BaseBdev2", 00:09:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.278 "is_configured": false, 00:09:16.278 "data_offset": 0, 00:09:16.278 "data_size": 0 00:09:16.278 }, 00:09:16.278 { 00:09:16.278 "name": "BaseBdev3", 00:09:16.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.278 "is_configured": false, 00:09:16.278 "data_offset": 0, 00:09:16.278 "data_size": 0 00:09:16.278 } 00:09:16.278 ] 00:09:16.278 }' 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.278 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.536 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.536 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.536 14:26:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.536 [2024-11-20 14:26:17.578244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.536 [2024-11-20 14:26:17.578291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:16.536 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.536 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.536 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.536 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.536 [2024-11-20 14:26:17.586199] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.536 [2024-11-20 14:26:17.586258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.536 [2024-11-20 14:26:17.586274] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.536 [2024-11-20 14:26:17.586290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.536 [2024-11-20 14:26:17.586300] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.536 [2024-11-20 14:26:17.586315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.794 [2024-11-20 14:26:17.631691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.794 BaseBdev1 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.794 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.794 [ 00:09:16.794 { 00:09:16.794 "name": "BaseBdev1", 00:09:16.794 "aliases": [ 00:09:16.794 "d6f75e21-e4db-4eeb-86a0-fbf8419bb2d5" 00:09:16.794 ], 00:09:16.794 
"product_name": "Malloc disk", 00:09:16.794 "block_size": 512, 00:09:16.794 "num_blocks": 65536, 00:09:16.794 "uuid": "d6f75e21-e4db-4eeb-86a0-fbf8419bb2d5", 00:09:16.794 "assigned_rate_limits": { 00:09:16.794 "rw_ios_per_sec": 0, 00:09:16.794 "rw_mbytes_per_sec": 0, 00:09:16.794 "r_mbytes_per_sec": 0, 00:09:16.794 "w_mbytes_per_sec": 0 00:09:16.794 }, 00:09:16.794 "claimed": true, 00:09:16.794 "claim_type": "exclusive_write", 00:09:16.794 "zoned": false, 00:09:16.794 "supported_io_types": { 00:09:16.794 "read": true, 00:09:16.794 "write": true, 00:09:16.794 "unmap": true, 00:09:16.794 "flush": true, 00:09:16.794 "reset": true, 00:09:16.794 "nvme_admin": false, 00:09:16.794 "nvme_io": false, 00:09:16.794 "nvme_io_md": false, 00:09:16.794 "write_zeroes": true, 00:09:16.794 "zcopy": true, 00:09:16.794 "get_zone_info": false, 00:09:16.794 "zone_management": false, 00:09:16.794 "zone_append": false, 00:09:16.794 "compare": false, 00:09:16.794 "compare_and_write": false, 00:09:16.794 "abort": true, 00:09:16.795 "seek_hole": false, 00:09:16.795 "seek_data": false, 00:09:16.795 "copy": true, 00:09:16.795 "nvme_iov_md": false 00:09:16.795 }, 00:09:16.795 "memory_domains": [ 00:09:16.795 { 00:09:16.795 "dma_device_id": "system", 00:09:16.795 "dma_device_type": 1 00:09:16.795 }, 00:09:16.795 { 00:09:16.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.795 "dma_device_type": 2 00:09:16.795 } 00:09:16.795 ], 00:09:16.795 "driver_specific": {} 00:09:16.795 } 00:09:16.795 ] 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.795 14:26:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.795 "name": "Existed_Raid", 00:09:16.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.795 "strip_size_kb": 64, 00:09:16.795 "state": "configuring", 00:09:16.795 "raid_level": "raid0", 00:09:16.795 "superblock": false, 00:09:16.795 "num_base_bdevs": 3, 00:09:16.795 "num_base_bdevs_discovered": 1, 00:09:16.795 "num_base_bdevs_operational": 3, 00:09:16.795 "base_bdevs_list": [ 00:09:16.795 { 00:09:16.795 "name": "BaseBdev1", 
00:09:16.795 "uuid": "d6f75e21-e4db-4eeb-86a0-fbf8419bb2d5", 00:09:16.795 "is_configured": true, 00:09:16.795 "data_offset": 0, 00:09:16.795 "data_size": 65536 00:09:16.795 }, 00:09:16.795 { 00:09:16.795 "name": "BaseBdev2", 00:09:16.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.795 "is_configured": false, 00:09:16.795 "data_offset": 0, 00:09:16.795 "data_size": 0 00:09:16.795 }, 00:09:16.795 { 00:09:16.795 "name": "BaseBdev3", 00:09:16.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.795 "is_configured": false, 00:09:16.795 "data_offset": 0, 00:09:16.795 "data_size": 0 00:09:16.795 } 00:09:16.795 ] 00:09:16.795 }' 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.795 14:26:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.362 [2024-11-20 14:26:18.171893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.362 [2024-11-20 14:26:18.171991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.362 [2024-11-20 
14:26:18.179933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.362 [2024-11-20 14:26:18.182561] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.362 [2024-11-20 14:26:18.182615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.362 [2024-11-20 14:26:18.182848] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.362 [2024-11-20 14:26:18.183013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.362 "name": "Existed_Raid", 00:09:17.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.362 "strip_size_kb": 64, 00:09:17.362 "state": "configuring", 00:09:17.362 "raid_level": "raid0", 00:09:17.362 "superblock": false, 00:09:17.362 "num_base_bdevs": 3, 00:09:17.362 "num_base_bdevs_discovered": 1, 00:09:17.362 "num_base_bdevs_operational": 3, 00:09:17.362 "base_bdevs_list": [ 00:09:17.362 { 00:09:17.362 "name": "BaseBdev1", 00:09:17.362 "uuid": "d6f75e21-e4db-4eeb-86a0-fbf8419bb2d5", 00:09:17.362 "is_configured": true, 00:09:17.362 "data_offset": 0, 00:09:17.362 "data_size": 65536 00:09:17.362 }, 00:09:17.362 { 00:09:17.362 "name": "BaseBdev2", 00:09:17.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.362 "is_configured": false, 00:09:17.362 "data_offset": 0, 00:09:17.362 "data_size": 0 00:09:17.362 }, 00:09:17.362 { 00:09:17.362 "name": "BaseBdev3", 00:09:17.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.362 "is_configured": false, 00:09:17.362 "data_offset": 0, 00:09:17.362 "data_size": 0 00:09:17.362 } 00:09:17.362 ] 00:09:17.362 }' 00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:17.362 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.931 [2024-11-20 14:26:18.779227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.931 BaseBdev2 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.931 14:26:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.931 [ 00:09:17.931 { 00:09:17.931 "name": "BaseBdev2", 00:09:17.931 "aliases": [ 00:09:17.931 "cf5769ea-bec8-41a9-b086-42b40305cf60" 00:09:17.931 ], 00:09:17.931 "product_name": "Malloc disk", 00:09:17.931 "block_size": 512, 00:09:17.931 "num_blocks": 65536, 00:09:17.931 "uuid": "cf5769ea-bec8-41a9-b086-42b40305cf60", 00:09:17.931 "assigned_rate_limits": { 00:09:17.931 "rw_ios_per_sec": 0, 00:09:17.931 "rw_mbytes_per_sec": 0, 00:09:17.931 "r_mbytes_per_sec": 0, 00:09:17.931 "w_mbytes_per_sec": 0 00:09:17.931 }, 00:09:17.931 "claimed": true, 00:09:17.931 "claim_type": "exclusive_write", 00:09:17.931 "zoned": false, 00:09:17.931 "supported_io_types": { 00:09:17.931 "read": true, 00:09:17.931 "write": true, 00:09:17.931 "unmap": true, 00:09:17.931 "flush": true, 00:09:17.931 "reset": true, 00:09:17.931 "nvme_admin": false, 00:09:17.931 "nvme_io": false, 00:09:17.931 "nvme_io_md": false, 00:09:17.931 "write_zeroes": true, 00:09:17.931 "zcopy": true, 00:09:17.931 "get_zone_info": false, 00:09:17.931 "zone_management": false, 00:09:17.931 "zone_append": false, 00:09:17.931 "compare": false, 00:09:17.931 "compare_and_write": false, 00:09:17.931 "abort": true, 00:09:17.931 "seek_hole": false, 00:09:17.931 "seek_data": false, 00:09:17.931 "copy": true, 00:09:17.931 "nvme_iov_md": false 00:09:17.931 }, 00:09:17.931 "memory_domains": [ 00:09:17.931 { 00:09:17.931 "dma_device_id": "system", 00:09:17.931 "dma_device_type": 1 00:09:17.931 }, 00:09:17.931 { 00:09:17.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.931 "dma_device_type": 2 00:09:17.931 } 00:09:17.931 ], 00:09:17.931 "driver_specific": {} 00:09:17.931 } 00:09:17.931 ] 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.931 14:26:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.931 "name": "Existed_Raid", 00:09:17.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.931 "strip_size_kb": 64, 00:09:17.931 "state": "configuring", 00:09:17.931 "raid_level": "raid0", 00:09:17.931 "superblock": false, 00:09:17.931 "num_base_bdevs": 3, 00:09:17.931 "num_base_bdevs_discovered": 2, 00:09:17.931 "num_base_bdevs_operational": 3, 00:09:17.931 "base_bdevs_list": [ 00:09:17.931 { 00:09:17.931 "name": "BaseBdev1", 00:09:17.931 "uuid": "d6f75e21-e4db-4eeb-86a0-fbf8419bb2d5", 00:09:17.931 "is_configured": true, 00:09:17.931 "data_offset": 0, 00:09:17.931 "data_size": 65536 00:09:17.931 }, 00:09:17.931 { 00:09:17.931 "name": "BaseBdev2", 00:09:17.931 "uuid": "cf5769ea-bec8-41a9-b086-42b40305cf60", 00:09:17.931 "is_configured": true, 00:09:17.931 "data_offset": 0, 00:09:17.931 "data_size": 65536 00:09:17.931 }, 00:09:17.931 { 00:09:17.931 "name": "BaseBdev3", 00:09:17.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.931 "is_configured": false, 00:09:17.931 "data_offset": 0, 00:09:17.931 "data_size": 0 00:09:17.931 } 00:09:17.931 ] 00:09:17.931 }' 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.931 14:26:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.504 [2024-11-20 14:26:19.378610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.504 [2024-11-20 14:26:19.378934] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.504 [2024-11-20 14:26:19.378983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:18.504 [2024-11-20 14:26:19.379414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:18.504 [2024-11-20 14:26:19.379672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:18.504 [2024-11-20 14:26:19.379691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:18.504 [2024-11-20 14:26:19.380017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.504 BaseBdev3 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.504 
14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.504 [ 00:09:18.504 { 00:09:18.504 "name": "BaseBdev3", 00:09:18.504 "aliases": [ 00:09:18.504 "e18a3628-d0bb-4dd2-9acc-3daca22d11ae" 00:09:18.504 ], 00:09:18.504 "product_name": "Malloc disk", 00:09:18.504 "block_size": 512, 00:09:18.504 "num_blocks": 65536, 00:09:18.504 "uuid": "e18a3628-d0bb-4dd2-9acc-3daca22d11ae", 00:09:18.504 "assigned_rate_limits": { 00:09:18.504 "rw_ios_per_sec": 0, 00:09:18.504 "rw_mbytes_per_sec": 0, 00:09:18.504 "r_mbytes_per_sec": 0, 00:09:18.504 "w_mbytes_per_sec": 0 00:09:18.504 }, 00:09:18.504 "claimed": true, 00:09:18.504 "claim_type": "exclusive_write", 00:09:18.504 "zoned": false, 00:09:18.504 "supported_io_types": { 00:09:18.504 "read": true, 00:09:18.504 "write": true, 00:09:18.504 "unmap": true, 00:09:18.504 "flush": true, 00:09:18.504 "reset": true, 00:09:18.504 "nvme_admin": false, 00:09:18.504 "nvme_io": false, 00:09:18.504 "nvme_io_md": false, 00:09:18.504 "write_zeroes": true, 00:09:18.504 "zcopy": true, 00:09:18.504 "get_zone_info": false, 00:09:18.504 "zone_management": false, 00:09:18.504 "zone_append": false, 00:09:18.504 "compare": false, 00:09:18.504 "compare_and_write": false, 00:09:18.504 "abort": true, 00:09:18.504 "seek_hole": false, 00:09:18.504 "seek_data": false, 00:09:18.504 "copy": true, 00:09:18.504 "nvme_iov_md": false 00:09:18.504 }, 00:09:18.504 "memory_domains": [ 00:09:18.504 { 00:09:18.504 "dma_device_id": "system", 00:09:18.504 "dma_device_type": 1 00:09:18.504 }, 00:09:18.504 { 00:09:18.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.504 "dma_device_type": 2 00:09:18.504 } 00:09:18.504 ], 00:09:18.504 "driver_specific": {} 00:09:18.504 } 00:09:18.504 ] 
00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.504 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.504 "name": "Existed_Raid", 00:09:18.505 "uuid": "9cf2f462-b4e9-4568-85e8-2b0ae3463afe", 00:09:18.505 "strip_size_kb": 64, 00:09:18.505 "state": "online", 00:09:18.505 "raid_level": "raid0", 00:09:18.505 "superblock": false, 00:09:18.505 "num_base_bdevs": 3, 00:09:18.505 "num_base_bdevs_discovered": 3, 00:09:18.505 "num_base_bdevs_operational": 3, 00:09:18.505 "base_bdevs_list": [ 00:09:18.505 { 00:09:18.505 "name": "BaseBdev1", 00:09:18.505 "uuid": "d6f75e21-e4db-4eeb-86a0-fbf8419bb2d5", 00:09:18.505 "is_configured": true, 00:09:18.505 "data_offset": 0, 00:09:18.505 "data_size": 65536 00:09:18.505 }, 00:09:18.505 { 00:09:18.505 "name": "BaseBdev2", 00:09:18.505 "uuid": "cf5769ea-bec8-41a9-b086-42b40305cf60", 00:09:18.505 "is_configured": true, 00:09:18.505 "data_offset": 0, 00:09:18.505 "data_size": 65536 00:09:18.505 }, 00:09:18.505 { 00:09:18.505 "name": "BaseBdev3", 00:09:18.505 "uuid": "e18a3628-d0bb-4dd2-9acc-3daca22d11ae", 00:09:18.505 "is_configured": true, 00:09:18.505 "data_offset": 0, 00:09:18.505 "data_size": 65536 00:09:18.505 } 00:09:18.505 ] 00:09:18.505 }' 00:09:18.505 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.505 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.072 [2024-11-20 14:26:19.927230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.072 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.072 "name": "Existed_Raid", 00:09:19.072 "aliases": [ 00:09:19.072 "9cf2f462-b4e9-4568-85e8-2b0ae3463afe" 00:09:19.072 ], 00:09:19.072 "product_name": "Raid Volume", 00:09:19.072 "block_size": 512, 00:09:19.072 "num_blocks": 196608, 00:09:19.072 "uuid": "9cf2f462-b4e9-4568-85e8-2b0ae3463afe", 00:09:19.072 "assigned_rate_limits": { 00:09:19.072 "rw_ios_per_sec": 0, 00:09:19.072 "rw_mbytes_per_sec": 0, 00:09:19.072 "r_mbytes_per_sec": 0, 00:09:19.072 "w_mbytes_per_sec": 0 00:09:19.072 }, 00:09:19.072 "claimed": false, 00:09:19.072 "zoned": false, 00:09:19.072 "supported_io_types": { 00:09:19.072 "read": true, 00:09:19.072 "write": true, 00:09:19.072 "unmap": true, 00:09:19.072 "flush": true, 00:09:19.072 "reset": true, 00:09:19.072 "nvme_admin": false, 00:09:19.072 "nvme_io": false, 00:09:19.072 "nvme_io_md": false, 00:09:19.072 "write_zeroes": true, 00:09:19.072 "zcopy": false, 00:09:19.072 "get_zone_info": false, 00:09:19.072 "zone_management": false, 00:09:19.072 
"zone_append": false, 00:09:19.072 "compare": false, 00:09:19.072 "compare_and_write": false, 00:09:19.072 "abort": false, 00:09:19.072 "seek_hole": false, 00:09:19.072 "seek_data": false, 00:09:19.072 "copy": false, 00:09:19.072 "nvme_iov_md": false 00:09:19.072 }, 00:09:19.072 "memory_domains": [ 00:09:19.072 { 00:09:19.072 "dma_device_id": "system", 00:09:19.072 "dma_device_type": 1 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.072 "dma_device_type": 2 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "dma_device_id": "system", 00:09:19.072 "dma_device_type": 1 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.072 "dma_device_type": 2 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "dma_device_id": "system", 00:09:19.072 "dma_device_type": 1 00:09:19.072 }, 00:09:19.072 { 00:09:19.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.072 "dma_device_type": 2 00:09:19.072 } 00:09:19.072 ], 00:09:19.072 "driver_specific": { 00:09:19.072 "raid": { 00:09:19.072 "uuid": "9cf2f462-b4e9-4568-85e8-2b0ae3463afe", 00:09:19.072 "strip_size_kb": 64, 00:09:19.072 "state": "online", 00:09:19.072 "raid_level": "raid0", 00:09:19.072 "superblock": false, 00:09:19.072 "num_base_bdevs": 3, 00:09:19.072 "num_base_bdevs_discovered": 3, 00:09:19.072 "num_base_bdevs_operational": 3, 00:09:19.072 "base_bdevs_list": [ 00:09:19.072 { 00:09:19.072 "name": "BaseBdev1", 00:09:19.072 "uuid": "d6f75e21-e4db-4eeb-86a0-fbf8419bb2d5", 00:09:19.072 "is_configured": true, 00:09:19.073 "data_offset": 0, 00:09:19.073 "data_size": 65536 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "name": "BaseBdev2", 00:09:19.073 "uuid": "cf5769ea-bec8-41a9-b086-42b40305cf60", 00:09:19.073 "is_configured": true, 00:09:19.073 "data_offset": 0, 00:09:19.073 "data_size": 65536 00:09:19.073 }, 00:09:19.073 { 00:09:19.073 "name": "BaseBdev3", 00:09:19.073 "uuid": "e18a3628-d0bb-4dd2-9acc-3daca22d11ae", 00:09:19.073 "is_configured": true, 
00:09:19.073 "data_offset": 0, 00:09:19.073 "data_size": 65536 00:09:19.073 } 00:09:19.073 ] 00:09:19.073 } 00:09:19.073 } 00:09:19.073 }' 00:09:19.073 14:26:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:19.073 BaseBdev2 00:09:19.073 BaseBdev3' 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.073 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.331 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.332 [2024-11-20 14:26:20.238916] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.332 [2024-11-20 14:26:20.238952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.332 [2024-11-20 14:26:20.239024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.332 "name": "Existed_Raid", 00:09:19.332 "uuid": "9cf2f462-b4e9-4568-85e8-2b0ae3463afe", 00:09:19.332 "strip_size_kb": 64, 00:09:19.332 "state": "offline", 00:09:19.332 "raid_level": "raid0", 00:09:19.332 "superblock": false, 00:09:19.332 "num_base_bdevs": 3, 00:09:19.332 "num_base_bdevs_discovered": 2, 00:09:19.332 "num_base_bdevs_operational": 2, 00:09:19.332 "base_bdevs_list": [ 00:09:19.332 { 00:09:19.332 "name": null, 00:09:19.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.332 "is_configured": false, 00:09:19.332 "data_offset": 0, 00:09:19.332 "data_size": 65536 00:09:19.332 }, 00:09:19.332 { 00:09:19.332 "name": "BaseBdev2", 00:09:19.332 "uuid": "cf5769ea-bec8-41a9-b086-42b40305cf60", 00:09:19.332 "is_configured": true, 00:09:19.332 "data_offset": 0, 00:09:19.332 "data_size": 65536 00:09:19.332 }, 00:09:19.332 { 00:09:19.332 "name": "BaseBdev3", 00:09:19.332 "uuid": "e18a3628-d0bb-4dd2-9acc-3daca22d11ae", 00:09:19.332 "is_configured": true, 00:09:19.332 "data_offset": 0, 00:09:19.332 "data_size": 65536 00:09:19.332 } 00:09:19.332 ] 00:09:19.332 }' 00:09:19.332 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.332 14:26:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:19.899 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:19.900 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.900 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.900 [2024-11-20 14:26:20.898803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.158 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.158 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.158 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.158 14:26:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.158 14:26:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.158 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.158 14:26:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.158 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.158 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.158 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.158 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.159 [2024-11-20 14:26:21.040482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:20.159 [2024-11-20 14:26:21.040565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.159 14:26:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.159 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.418 BaseBdev2 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.418 14:26:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.418 [ 00:09:20.418 { 00:09:20.418 "name": "BaseBdev2", 00:09:20.418 "aliases": [ 00:09:20.418 "e7570f99-d69e-4987-abb0-8f017f746e98" 00:09:20.418 ], 00:09:20.418 "product_name": "Malloc disk", 00:09:20.418 "block_size": 512, 00:09:20.418 "num_blocks": 65536, 00:09:20.418 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:20.418 "assigned_rate_limits": { 00:09:20.418 "rw_ios_per_sec": 0, 00:09:20.418 "rw_mbytes_per_sec": 0, 00:09:20.418 "r_mbytes_per_sec": 0, 00:09:20.418 "w_mbytes_per_sec": 0 00:09:20.418 }, 00:09:20.418 "claimed": false, 00:09:20.418 "zoned": false, 00:09:20.418 "supported_io_types": { 00:09:20.418 "read": true, 00:09:20.418 "write": true, 00:09:20.418 "unmap": true, 00:09:20.418 "flush": true, 00:09:20.418 "reset": true, 00:09:20.418 "nvme_admin": false, 00:09:20.418 "nvme_io": false, 00:09:20.418 "nvme_io_md": false, 00:09:20.418 "write_zeroes": true, 00:09:20.418 "zcopy": true, 00:09:20.418 "get_zone_info": false, 00:09:20.418 "zone_management": false, 00:09:20.418 "zone_append": false, 00:09:20.418 "compare": false, 00:09:20.418 "compare_and_write": false, 00:09:20.418 "abort": true, 00:09:20.418 "seek_hole": false, 00:09:20.418 "seek_data": false, 00:09:20.418 "copy": true, 00:09:20.418 "nvme_iov_md": false 00:09:20.418 }, 00:09:20.418 "memory_domains": [ 00:09:20.418 { 00:09:20.418 "dma_device_id": "system", 00:09:20.418 "dma_device_type": 1 00:09:20.418 }, 00:09:20.418 { 00:09:20.418 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:20.418 "dma_device_type": 2 00:09:20.418 } 00:09:20.418 ], 00:09:20.418 "driver_specific": {} 00:09:20.418 } 00:09:20.418 ] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.418 BaseBdev3 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.418 14:26:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.418 [ 00:09:20.418 { 00:09:20.418 "name": "BaseBdev3", 00:09:20.418 "aliases": [ 00:09:20.418 "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8" 00:09:20.418 ], 00:09:20.418 "product_name": "Malloc disk", 00:09:20.418 "block_size": 512, 00:09:20.418 "num_blocks": 65536, 00:09:20.418 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:20.418 "assigned_rate_limits": { 00:09:20.418 "rw_ios_per_sec": 0, 00:09:20.418 "rw_mbytes_per_sec": 0, 00:09:20.418 "r_mbytes_per_sec": 0, 00:09:20.418 "w_mbytes_per_sec": 0 00:09:20.418 }, 00:09:20.418 "claimed": false, 00:09:20.418 "zoned": false, 00:09:20.418 "supported_io_types": { 00:09:20.418 "read": true, 00:09:20.418 "write": true, 00:09:20.418 "unmap": true, 00:09:20.418 "flush": true, 00:09:20.418 "reset": true, 00:09:20.418 "nvme_admin": false, 00:09:20.418 "nvme_io": false, 00:09:20.418 "nvme_io_md": false, 00:09:20.418 "write_zeroes": true, 00:09:20.418 "zcopy": true, 00:09:20.418 "get_zone_info": false, 00:09:20.418 "zone_management": false, 00:09:20.418 "zone_append": false, 00:09:20.418 "compare": false, 00:09:20.418 "compare_and_write": false, 00:09:20.418 "abort": true, 00:09:20.418 "seek_hole": false, 00:09:20.418 "seek_data": false, 00:09:20.418 "copy": true, 00:09:20.418 "nvme_iov_md": false 00:09:20.418 }, 00:09:20.418 "memory_domains": [ 00:09:20.418 { 00:09:20.418 "dma_device_id": "system", 00:09:20.418 "dma_device_type": 1 00:09:20.418 }, 00:09:20.418 { 00:09:20.418 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:20.418 "dma_device_type": 2 00:09:20.418 } 00:09:20.418 ], 00:09:20.418 "driver_specific": {} 00:09:20.418 } 00:09:20.418 ] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.418 [2024-11-20 14:26:21.335301] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.418 [2024-11-20 14:26:21.335480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.418 [2024-11-20 14:26:21.335527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.418 [2024-11-20 14:26:21.338027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.418 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.419 
14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.419 "name": "Existed_Raid", 00:09:20.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.419 "strip_size_kb": 64, 00:09:20.419 "state": "configuring", 00:09:20.419 "raid_level": "raid0", 00:09:20.419 "superblock": false, 00:09:20.419 "num_base_bdevs": 3, 00:09:20.419 "num_base_bdevs_discovered": 2, 00:09:20.419 "num_base_bdevs_operational": 3, 00:09:20.419 "base_bdevs_list": [ 00:09:20.419 { 00:09:20.419 "name": "BaseBdev1", 00:09:20.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.419 "is_configured": false, 00:09:20.419 
"data_offset": 0, 00:09:20.419 "data_size": 0 00:09:20.419 }, 00:09:20.419 { 00:09:20.419 "name": "BaseBdev2", 00:09:20.419 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:20.419 "is_configured": true, 00:09:20.419 "data_offset": 0, 00:09:20.419 "data_size": 65536 00:09:20.419 }, 00:09:20.419 { 00:09:20.419 "name": "BaseBdev3", 00:09:20.419 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:20.419 "is_configured": true, 00:09:20.419 "data_offset": 0, 00:09:20.419 "data_size": 65536 00:09:20.419 } 00:09:20.419 ] 00:09:20.419 }' 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.419 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.986 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:20.986 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.987 [2024-11-20 14:26:21.855491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.987 "name": "Existed_Raid", 00:09:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.987 "strip_size_kb": 64, 00:09:20.987 "state": "configuring", 00:09:20.987 "raid_level": "raid0", 00:09:20.987 "superblock": false, 00:09:20.987 "num_base_bdevs": 3, 00:09:20.987 "num_base_bdevs_discovered": 1, 00:09:20.987 "num_base_bdevs_operational": 3, 00:09:20.987 "base_bdevs_list": [ 00:09:20.987 { 00:09:20.987 "name": "BaseBdev1", 00:09:20.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.987 "is_configured": false, 00:09:20.987 "data_offset": 0, 00:09:20.987 "data_size": 0 00:09:20.987 }, 00:09:20.987 { 00:09:20.987 "name": null, 00:09:20.987 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:20.987 "is_configured": false, 00:09:20.987 "data_offset": 0, 00:09:20.987 "data_size": 65536 00:09:20.987 }, 00:09:20.987 { 
00:09:20.987 "name": "BaseBdev3", 00:09:20.987 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:20.987 "is_configured": true, 00:09:20.987 "data_offset": 0, 00:09:20.987 "data_size": 65536 00:09:20.987 } 00:09:20.987 ] 00:09:20.987 }' 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.987 14:26:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 [2024-11-20 14:26:22.462227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:21.554 BaseBdev1 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:21.554 14:26:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.554 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.554 [ 00:09:21.554 { 00:09:21.554 "name": "BaseBdev1", 00:09:21.554 "aliases": [ 00:09:21.554 "5ff1b783-5ce7-4703-8bf4-3ea70fe22178" 00:09:21.554 ], 00:09:21.554 "product_name": "Malloc disk", 00:09:21.554 "block_size": 512, 00:09:21.554 "num_blocks": 65536, 00:09:21.554 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:21.554 "assigned_rate_limits": { 00:09:21.554 "rw_ios_per_sec": 0, 00:09:21.554 "rw_mbytes_per_sec": 0, 00:09:21.554 "r_mbytes_per_sec": 0, 00:09:21.554 "w_mbytes_per_sec": 0 00:09:21.554 }, 00:09:21.554 "claimed": true, 00:09:21.554 "claim_type": "exclusive_write", 00:09:21.554 "zoned": false, 00:09:21.554 "supported_io_types": { 00:09:21.554 "read": true, 00:09:21.554 "write": true, 00:09:21.554 "unmap": true, 00:09:21.554 "flush": true, 
00:09:21.554 "reset": true, 00:09:21.554 "nvme_admin": false, 00:09:21.554 "nvme_io": false, 00:09:21.554 "nvme_io_md": false, 00:09:21.555 "write_zeroes": true, 00:09:21.555 "zcopy": true, 00:09:21.555 "get_zone_info": false, 00:09:21.555 "zone_management": false, 00:09:21.555 "zone_append": false, 00:09:21.555 "compare": false, 00:09:21.555 "compare_and_write": false, 00:09:21.555 "abort": true, 00:09:21.555 "seek_hole": false, 00:09:21.555 "seek_data": false, 00:09:21.555 "copy": true, 00:09:21.555 "nvme_iov_md": false 00:09:21.555 }, 00:09:21.555 "memory_domains": [ 00:09:21.555 { 00:09:21.555 "dma_device_id": "system", 00:09:21.555 "dma_device_type": 1 00:09:21.555 }, 00:09:21.555 { 00:09:21.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.555 "dma_device_type": 2 00:09:21.555 } 00:09:21.555 ], 00:09:21.555 "driver_specific": {} 00:09:21.555 } 00:09:21.555 ] 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.555 "name": "Existed_Raid", 00:09:21.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.555 "strip_size_kb": 64, 00:09:21.555 "state": "configuring", 00:09:21.555 "raid_level": "raid0", 00:09:21.555 "superblock": false, 00:09:21.555 "num_base_bdevs": 3, 00:09:21.555 "num_base_bdevs_discovered": 2, 00:09:21.555 "num_base_bdevs_operational": 3, 00:09:21.555 "base_bdevs_list": [ 00:09:21.555 { 00:09:21.555 "name": "BaseBdev1", 00:09:21.555 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:21.555 "is_configured": true, 00:09:21.555 "data_offset": 0, 00:09:21.555 "data_size": 65536 00:09:21.555 }, 00:09:21.555 { 00:09:21.555 "name": null, 00:09:21.555 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:21.555 "is_configured": false, 00:09:21.555 "data_offset": 0, 00:09:21.555 "data_size": 65536 00:09:21.555 }, 00:09:21.555 { 00:09:21.555 "name": "BaseBdev3", 00:09:21.555 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:21.555 "is_configured": true, 00:09:21.555 "data_offset": 0, 00:09:21.555 "data_size": 65536 
00:09:21.555 } 00:09:21.555 ] 00:09:21.555 }' 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.555 14:26:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.122 [2024-11-20 14:26:23.070461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.122 
14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.122 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.122 "name": "Existed_Raid", 00:09:22.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.122 "strip_size_kb": 64, 00:09:22.122 "state": "configuring", 00:09:22.122 "raid_level": "raid0", 00:09:22.122 "superblock": false, 00:09:22.122 "num_base_bdevs": 3, 00:09:22.122 "num_base_bdevs_discovered": 1, 00:09:22.122 "num_base_bdevs_operational": 3, 00:09:22.122 "base_bdevs_list": [ 00:09:22.122 { 00:09:22.122 "name": "BaseBdev1", 00:09:22.122 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:22.122 "is_configured": true, 00:09:22.122 "data_offset": 0, 00:09:22.122 "data_size": 65536 00:09:22.122 }, 00:09:22.122 { 00:09:22.122 "name": null, 
00:09:22.123 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:22.123 "is_configured": false, 00:09:22.123 "data_offset": 0, 00:09:22.123 "data_size": 65536 00:09:22.123 }, 00:09:22.123 { 00:09:22.123 "name": null, 00:09:22.123 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:22.123 "is_configured": false, 00:09:22.123 "data_offset": 0, 00:09:22.123 "data_size": 65536 00:09:22.123 } 00:09:22.123 ] 00:09:22.123 }' 00:09:22.123 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.123 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.689 [2024-11-20 14:26:23.658681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.689 "name": "Existed_Raid", 00:09:22.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.689 "strip_size_kb": 64, 00:09:22.689 "state": "configuring", 00:09:22.689 "raid_level": "raid0", 00:09:22.689 "superblock": false, 00:09:22.689 
"num_base_bdevs": 3, 00:09:22.689 "num_base_bdevs_discovered": 2, 00:09:22.689 "num_base_bdevs_operational": 3, 00:09:22.689 "base_bdevs_list": [ 00:09:22.689 { 00:09:22.689 "name": "BaseBdev1", 00:09:22.689 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:22.689 "is_configured": true, 00:09:22.689 "data_offset": 0, 00:09:22.689 "data_size": 65536 00:09:22.689 }, 00:09:22.689 { 00:09:22.689 "name": null, 00:09:22.689 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:22.689 "is_configured": false, 00:09:22.689 "data_offset": 0, 00:09:22.689 "data_size": 65536 00:09:22.689 }, 00:09:22.689 { 00:09:22.689 "name": "BaseBdev3", 00:09:22.689 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:22.689 "is_configured": true, 00:09:22.689 "data_offset": 0, 00:09:22.689 "data_size": 65536 00:09:22.689 } 00:09:22.689 ] 00:09:22.689 }' 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.689 14:26:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.325 14:26:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.325 [2024-11-20 14:26:24.230832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.325 "name": "Existed_Raid", 00:09:23.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.325 "strip_size_kb": 64, 00:09:23.325 "state": "configuring", 00:09:23.325 "raid_level": "raid0", 00:09:23.325 "superblock": false, 00:09:23.325 "num_base_bdevs": 3, 00:09:23.325 "num_base_bdevs_discovered": 1, 00:09:23.325 "num_base_bdevs_operational": 3, 00:09:23.325 "base_bdevs_list": [ 00:09:23.325 { 00:09:23.325 "name": null, 00:09:23.325 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:23.325 "is_configured": false, 00:09:23.325 "data_offset": 0, 00:09:23.325 "data_size": 65536 00:09:23.325 }, 00:09:23.325 { 00:09:23.325 "name": null, 00:09:23.325 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:23.325 "is_configured": false, 00:09:23.325 "data_offset": 0, 00:09:23.325 "data_size": 65536 00:09:23.325 }, 00:09:23.325 { 00:09:23.325 "name": "BaseBdev3", 00:09:23.325 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:23.325 "is_configured": true, 00:09:23.325 "data_offset": 0, 00:09:23.325 "data_size": 65536 00:09:23.325 } 00:09:23.325 ] 00:09:23.325 }' 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.325 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.893 [2024-11-20 14:26:24.900018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.893 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.151 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.151 "name": "Existed_Raid", 00:09:24.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.151 "strip_size_kb": 64, 00:09:24.151 "state": "configuring", 00:09:24.151 "raid_level": "raid0", 00:09:24.151 "superblock": false, 00:09:24.151 "num_base_bdevs": 3, 00:09:24.151 "num_base_bdevs_discovered": 2, 00:09:24.151 "num_base_bdevs_operational": 3, 00:09:24.151 "base_bdevs_list": [ 00:09:24.151 { 00:09:24.151 "name": null, 00:09:24.151 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:24.151 "is_configured": false, 00:09:24.151 "data_offset": 0, 00:09:24.151 "data_size": 65536 00:09:24.151 }, 00:09:24.151 { 00:09:24.151 "name": "BaseBdev2", 00:09:24.151 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:24.151 "is_configured": true, 00:09:24.151 "data_offset": 0, 00:09:24.151 "data_size": 65536 00:09:24.151 }, 00:09:24.151 { 00:09:24.151 "name": "BaseBdev3", 00:09:24.151 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:24.151 "is_configured": true, 00:09:24.151 "data_offset": 0, 00:09:24.151 "data_size": 65536 00:09:24.151 } 00:09:24.151 ] 00:09:24.151 }' 00:09:24.151 14:26:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.152 14:26:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.410 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.410 14:26:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.410 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.410 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.410 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5ff1b783-5ce7-4703-8bf4-3ea70fe22178 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.669 [2024-11-20 14:26:25.578278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:24.669 [2024-11-20 14:26:25.578338] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:24.669 [2024-11-20 14:26:25.578354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:24.669 [2024-11-20 14:26:25.578706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:24.669 [2024-11-20 14:26:25.578918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:24.669 [2024-11-20 14:26:25.578935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:24.669 [2024-11-20 14:26:25.579229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.669 NewBaseBdev 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:24.669 [ 00:09:24.669 { 00:09:24.669 "name": "NewBaseBdev", 00:09:24.669 "aliases": [ 00:09:24.669 "5ff1b783-5ce7-4703-8bf4-3ea70fe22178" 00:09:24.669 ], 00:09:24.669 "product_name": "Malloc disk", 00:09:24.669 "block_size": 512, 00:09:24.669 "num_blocks": 65536, 00:09:24.669 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:24.669 "assigned_rate_limits": { 00:09:24.669 "rw_ios_per_sec": 0, 00:09:24.669 "rw_mbytes_per_sec": 0, 00:09:24.669 "r_mbytes_per_sec": 0, 00:09:24.669 "w_mbytes_per_sec": 0 00:09:24.669 }, 00:09:24.669 "claimed": true, 00:09:24.669 "claim_type": "exclusive_write", 00:09:24.669 "zoned": false, 00:09:24.669 "supported_io_types": { 00:09:24.669 "read": true, 00:09:24.669 "write": true, 00:09:24.669 "unmap": true, 00:09:24.669 "flush": true, 00:09:24.669 "reset": true, 00:09:24.669 "nvme_admin": false, 00:09:24.669 "nvme_io": false, 00:09:24.669 "nvme_io_md": false, 00:09:24.669 "write_zeroes": true, 00:09:24.669 "zcopy": true, 00:09:24.669 "get_zone_info": false, 00:09:24.669 "zone_management": false, 00:09:24.669 "zone_append": false, 00:09:24.669 "compare": false, 00:09:24.669 "compare_and_write": false, 00:09:24.669 "abort": true, 00:09:24.669 "seek_hole": false, 00:09:24.669 "seek_data": false, 00:09:24.669 "copy": true, 00:09:24.669 "nvme_iov_md": false 00:09:24.669 }, 00:09:24.669 "memory_domains": [ 00:09:24.669 { 00:09:24.669 "dma_device_id": "system", 00:09:24.669 "dma_device_type": 1 00:09:24.669 }, 00:09:24.669 { 00:09:24.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.669 "dma_device_type": 2 00:09:24.669 } 00:09:24.669 ], 00:09:24.669 "driver_specific": {} 00:09:24.669 } 00:09:24.669 ] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.669 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.670 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.670 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.670 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.670 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.670 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.670 "name": "Existed_Raid", 00:09:24.670 "uuid": "ba8d87e4-45c5-45ad-b36b-300442c50507", 00:09:24.670 "strip_size_kb": 64, 00:09:24.670 "state": "online", 00:09:24.670 "raid_level": "raid0", 00:09:24.670 "superblock": false, 00:09:24.670 "num_base_bdevs": 3, 00:09:24.670 
"num_base_bdevs_discovered": 3, 00:09:24.670 "num_base_bdevs_operational": 3, 00:09:24.670 "base_bdevs_list": [ 00:09:24.670 { 00:09:24.670 "name": "NewBaseBdev", 00:09:24.670 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:24.670 "is_configured": true, 00:09:24.670 "data_offset": 0, 00:09:24.670 "data_size": 65536 00:09:24.670 }, 00:09:24.670 { 00:09:24.670 "name": "BaseBdev2", 00:09:24.670 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:24.670 "is_configured": true, 00:09:24.670 "data_offset": 0, 00:09:24.670 "data_size": 65536 00:09:24.670 }, 00:09:24.670 { 00:09:24.670 "name": "BaseBdev3", 00:09:24.670 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:24.670 "is_configured": true, 00:09:24.670 "data_offset": 0, 00:09:24.670 "data_size": 65536 00:09:24.670 } 00:09:24.670 ] 00:09:24.670 }' 00:09:24.670 14:26:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.670 14:26:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.238 [2024-11-20 14:26:26.118867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.238 "name": "Existed_Raid", 00:09:25.238 "aliases": [ 00:09:25.238 "ba8d87e4-45c5-45ad-b36b-300442c50507" 00:09:25.238 ], 00:09:25.238 "product_name": "Raid Volume", 00:09:25.238 "block_size": 512, 00:09:25.238 "num_blocks": 196608, 00:09:25.238 "uuid": "ba8d87e4-45c5-45ad-b36b-300442c50507", 00:09:25.238 "assigned_rate_limits": { 00:09:25.238 "rw_ios_per_sec": 0, 00:09:25.238 "rw_mbytes_per_sec": 0, 00:09:25.238 "r_mbytes_per_sec": 0, 00:09:25.238 "w_mbytes_per_sec": 0 00:09:25.238 }, 00:09:25.238 "claimed": false, 00:09:25.238 "zoned": false, 00:09:25.238 "supported_io_types": { 00:09:25.238 "read": true, 00:09:25.238 "write": true, 00:09:25.238 "unmap": true, 00:09:25.238 "flush": true, 00:09:25.238 "reset": true, 00:09:25.238 "nvme_admin": false, 00:09:25.238 "nvme_io": false, 00:09:25.238 "nvme_io_md": false, 00:09:25.238 "write_zeroes": true, 00:09:25.238 "zcopy": false, 00:09:25.238 "get_zone_info": false, 00:09:25.238 "zone_management": false, 00:09:25.238 "zone_append": false, 00:09:25.238 "compare": false, 00:09:25.238 "compare_and_write": false, 00:09:25.238 "abort": false, 00:09:25.238 "seek_hole": false, 00:09:25.238 "seek_data": false, 00:09:25.238 "copy": false, 00:09:25.238 "nvme_iov_md": false 00:09:25.238 }, 00:09:25.238 "memory_domains": [ 00:09:25.238 { 00:09:25.238 "dma_device_id": "system", 00:09:25.238 "dma_device_type": 1 00:09:25.238 }, 00:09:25.238 { 00:09:25.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.238 "dma_device_type": 2 00:09:25.238 }, 00:09:25.238 
{ 00:09:25.238 "dma_device_id": "system", 00:09:25.238 "dma_device_type": 1 00:09:25.238 }, 00:09:25.238 { 00:09:25.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.238 "dma_device_type": 2 00:09:25.238 }, 00:09:25.238 { 00:09:25.238 "dma_device_id": "system", 00:09:25.238 "dma_device_type": 1 00:09:25.238 }, 00:09:25.238 { 00:09:25.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.238 "dma_device_type": 2 00:09:25.238 } 00:09:25.238 ], 00:09:25.238 "driver_specific": { 00:09:25.238 "raid": { 00:09:25.238 "uuid": "ba8d87e4-45c5-45ad-b36b-300442c50507", 00:09:25.238 "strip_size_kb": 64, 00:09:25.238 "state": "online", 00:09:25.238 "raid_level": "raid0", 00:09:25.238 "superblock": false, 00:09:25.238 "num_base_bdevs": 3, 00:09:25.238 "num_base_bdevs_discovered": 3, 00:09:25.238 "num_base_bdevs_operational": 3, 00:09:25.238 "base_bdevs_list": [ 00:09:25.238 { 00:09:25.238 "name": "NewBaseBdev", 00:09:25.238 "uuid": "5ff1b783-5ce7-4703-8bf4-3ea70fe22178", 00:09:25.238 "is_configured": true, 00:09:25.238 "data_offset": 0, 00:09:25.238 "data_size": 65536 00:09:25.238 }, 00:09:25.238 { 00:09:25.238 "name": "BaseBdev2", 00:09:25.238 "uuid": "e7570f99-d69e-4987-abb0-8f017f746e98", 00:09:25.238 "is_configured": true, 00:09:25.238 "data_offset": 0, 00:09:25.238 "data_size": 65536 00:09:25.238 }, 00:09:25.238 { 00:09:25.238 "name": "BaseBdev3", 00:09:25.238 "uuid": "677ecd3a-0526-4a0c-8a49-0cadc39c0bb8", 00:09:25.238 "is_configured": true, 00:09:25.238 "data_offset": 0, 00:09:25.238 "data_size": 65536 00:09:25.238 } 00:09:25.238 ] 00:09:25.238 } 00:09:25.238 } 00:09:25.238 }' 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:25.238 BaseBdev2 00:09:25.238 BaseBdev3' 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.238 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.498 
14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.498 [2024-11-20 14:26:26.430602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.498 [2024-11-20 14:26:26.430797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.498 [2024-11-20 14:26:26.430950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.498 [2024-11-20 14:26:26.431031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.498 [2024-11-20 14:26:26.431053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63888 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63888 ']' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63888 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63888 00:09:25.498 killing process with pid 63888 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63888' 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63888 00:09:25.498 [2024-11-20 14:26:26.476508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.498 14:26:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63888 00:09:25.757 [2024-11-20 14:26:26.747159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:27.133 ************************************ 00:09:27.133 END TEST raid_state_function_test 00:09:27.133 00:09:27.133 real 0m11.866s 00:09:27.133 user 0m19.681s 00:09:27.133 sys 0m1.646s 00:09:27.133 14:26:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.133 ************************************ 00:09:27.133 14:26:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:27.133 14:26:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.133 14:26:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.133 14:26:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.133 ************************************ 00:09:27.133 START TEST raid_state_function_test_sb 00:09:27.133 ************************************ 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:27.133 Process raid pid: 64520 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64520 00:09:27.133 14:26:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64520' 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64520 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64520 ']' 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.133 14:26:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.133 [2024-11-20 14:26:27.988007] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:09:27.133 [2024-11-20 14:26:27.988454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.133 [2024-11-20 14:26:28.180596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.392 [2024-11-20 14:26:28.342143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.649 [2024-11-20 14:26:28.570869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.649 [2024-11-20 14:26:28.571129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.907 [2024-11-20 14:26:28.948206] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.907 [2024-11-20 14:26:28.948279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.907 [2024-11-20 14:26:28.948298] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.907 [2024-11-20 14:26:28.948315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.907 [2024-11-20 14:26:28.948326] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:27.907 [2024-11-20 14:26:28.948341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.907 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.908 14:26:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.174 14:26:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.174 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.174 "name": "Existed_Raid", 00:09:28.174 "uuid": "0bc7138f-9fa9-44bc-bc04-77ddb7a9fe34", 00:09:28.174 "strip_size_kb": 64, 00:09:28.174 "state": "configuring", 00:09:28.174 "raid_level": "raid0", 00:09:28.174 "superblock": true, 00:09:28.174 "num_base_bdevs": 3, 00:09:28.175 "num_base_bdevs_discovered": 0, 00:09:28.175 "num_base_bdevs_operational": 3, 00:09:28.175 "base_bdevs_list": [ 00:09:28.175 { 00:09:28.175 "name": "BaseBdev1", 00:09:28.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.175 "is_configured": false, 00:09:28.175 "data_offset": 0, 00:09:28.175 "data_size": 0 00:09:28.175 }, 00:09:28.175 { 00:09:28.175 "name": "BaseBdev2", 00:09:28.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.175 "is_configured": false, 00:09:28.175 "data_offset": 0, 00:09:28.175 "data_size": 0 00:09:28.175 }, 00:09:28.175 { 00:09:28.175 "name": "BaseBdev3", 00:09:28.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.175 "is_configured": false, 00:09:28.175 "data_offset": 0, 00:09:28.175 "data_size": 0 00:09:28.175 } 00:09:28.175 ] 00:09:28.175 }' 00:09:28.175 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.175 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.503 [2024-11-20 14:26:29.476295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.503 [2024-11-20 14:26:29.476341] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.503 [2024-11-20 14:26:29.484286] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.503 [2024-11-20 14:26:29.484348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.503 [2024-11-20 14:26:29.484365] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.503 [2024-11-20 14:26:29.484383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.503 [2024-11-20 14:26:29.484393] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.503 [2024-11-20 14:26:29.484408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.503 [2024-11-20 14:26:29.530233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.503 BaseBdev1 
00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.503 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.503 [ 00:09:28.503 { 00:09:28.503 "name": "BaseBdev1", 00:09:28.503 "aliases": [ 00:09:28.503 "f8bd6df3-bb6e-4703-b45b-8ea3d79d5f88" 00:09:28.503 ], 00:09:28.503 "product_name": "Malloc disk", 00:09:28.503 "block_size": 512, 00:09:28.503 "num_blocks": 65536, 00:09:28.503 "uuid": "f8bd6df3-bb6e-4703-b45b-8ea3d79d5f88", 00:09:28.503 "assigned_rate_limits": { 00:09:28.503 
"rw_ios_per_sec": 0, 00:09:28.503 "rw_mbytes_per_sec": 0, 00:09:28.503 "r_mbytes_per_sec": 0, 00:09:28.503 "w_mbytes_per_sec": 0 00:09:28.503 }, 00:09:28.503 "claimed": true, 00:09:28.503 "claim_type": "exclusive_write", 00:09:28.503 "zoned": false, 00:09:28.503 "supported_io_types": { 00:09:28.503 "read": true, 00:09:28.503 "write": true, 00:09:28.503 "unmap": true, 00:09:28.503 "flush": true, 00:09:28.503 "reset": true, 00:09:28.503 "nvme_admin": false, 00:09:28.503 "nvme_io": false, 00:09:28.503 "nvme_io_md": false, 00:09:28.503 "write_zeroes": true, 00:09:28.503 "zcopy": true, 00:09:28.503 "get_zone_info": false, 00:09:28.503 "zone_management": false, 00:09:28.503 "zone_append": false, 00:09:28.503 "compare": false, 00:09:28.503 "compare_and_write": false, 00:09:28.503 "abort": true, 00:09:28.762 "seek_hole": false, 00:09:28.762 "seek_data": false, 00:09:28.762 "copy": true, 00:09:28.762 "nvme_iov_md": false 00:09:28.762 }, 00:09:28.762 "memory_domains": [ 00:09:28.762 { 00:09:28.762 "dma_device_id": "system", 00:09:28.762 "dma_device_type": 1 00:09:28.762 }, 00:09:28.762 { 00:09:28.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.762 "dma_device_type": 2 00:09:28.762 } 00:09:28.762 ], 00:09:28.762 "driver_specific": {} 00:09:28.762 } 00:09:28.762 ] 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.762 "name": "Existed_Raid", 00:09:28.762 "uuid": "92d54542-9ba2-43fe-a0c6-03972484fdb0", 00:09:28.762 "strip_size_kb": 64, 00:09:28.762 "state": "configuring", 00:09:28.762 "raid_level": "raid0", 00:09:28.762 "superblock": true, 00:09:28.762 "num_base_bdevs": 3, 00:09:28.762 "num_base_bdevs_discovered": 1, 00:09:28.762 "num_base_bdevs_operational": 3, 00:09:28.762 "base_bdevs_list": [ 00:09:28.762 { 00:09:28.762 "name": "BaseBdev1", 00:09:28.762 "uuid": "f8bd6df3-bb6e-4703-b45b-8ea3d79d5f88", 00:09:28.762 "is_configured": true, 00:09:28.762 "data_offset": 2048, 00:09:28.762 "data_size": 63488 
00:09:28.762 }, 00:09:28.762 { 00:09:28.762 "name": "BaseBdev2", 00:09:28.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.762 "is_configured": false, 00:09:28.762 "data_offset": 0, 00:09:28.762 "data_size": 0 00:09:28.762 }, 00:09:28.762 { 00:09:28.762 "name": "BaseBdev3", 00:09:28.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.762 "is_configured": false, 00:09:28.762 "data_offset": 0, 00:09:28.762 "data_size": 0 00:09:28.762 } 00:09:28.762 ] 00:09:28.762 }' 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.762 14:26:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.020 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.020 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.020 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.020 [2024-11-20 14:26:30.066437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.020 [2024-11-20 14:26:30.066662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:29.020 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.020 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.020 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.020 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.020 [2024-11-20 14:26:30.074490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.279 [2024-11-20 
14:26:30.077094] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.279 [2024-11-20 14:26:30.077296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.279 [2024-11-20 14:26:30.077336] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.279 [2024-11-20 14:26:30.077354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.279 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.279 "name": "Existed_Raid", 00:09:29.279 "uuid": "7d70d2cc-3e79-4f64-a481-e25a99bca7f7", 00:09:29.279 "strip_size_kb": 64, 00:09:29.279 "state": "configuring", 00:09:29.279 "raid_level": "raid0", 00:09:29.279 "superblock": true, 00:09:29.279 "num_base_bdevs": 3, 00:09:29.279 "num_base_bdevs_discovered": 1, 00:09:29.279 "num_base_bdevs_operational": 3, 00:09:29.279 "base_bdevs_list": [ 00:09:29.279 { 00:09:29.279 "name": "BaseBdev1", 00:09:29.279 "uuid": "f8bd6df3-bb6e-4703-b45b-8ea3d79d5f88", 00:09:29.279 "is_configured": true, 00:09:29.279 "data_offset": 2048, 00:09:29.279 "data_size": 63488 00:09:29.279 }, 00:09:29.279 { 00:09:29.279 "name": "BaseBdev2", 00:09:29.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.279 "is_configured": false, 00:09:29.279 "data_offset": 0, 00:09:29.279 "data_size": 0 00:09:29.279 }, 00:09:29.279 { 00:09:29.279 "name": "BaseBdev3", 00:09:29.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.279 "is_configured": false, 00:09:29.279 "data_offset": 0, 00:09:29.280 "data_size": 0 00:09:29.280 } 00:09:29.280 ] 00:09:29.280 }' 00:09:29.280 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.280 14:26:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.539 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.539 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.539 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.797 [2024-11-20 14:26:30.613400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.797 BaseBdev2 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.797 [ 00:09:29.797 { 00:09:29.797 "name": "BaseBdev2", 00:09:29.797 "aliases": [ 00:09:29.797 "6eec2f77-3f0e-4b3e-86da-139bcf3f96f1" 00:09:29.797 ], 00:09:29.797 "product_name": "Malloc disk", 00:09:29.797 "block_size": 512, 00:09:29.797 "num_blocks": 65536, 00:09:29.797 "uuid": "6eec2f77-3f0e-4b3e-86da-139bcf3f96f1", 00:09:29.797 "assigned_rate_limits": { 00:09:29.797 "rw_ios_per_sec": 0, 00:09:29.797 "rw_mbytes_per_sec": 0, 00:09:29.797 "r_mbytes_per_sec": 0, 00:09:29.797 "w_mbytes_per_sec": 0 00:09:29.797 }, 00:09:29.797 "claimed": true, 00:09:29.797 "claim_type": "exclusive_write", 00:09:29.797 "zoned": false, 00:09:29.797 "supported_io_types": { 00:09:29.797 "read": true, 00:09:29.797 "write": true, 00:09:29.797 "unmap": true, 00:09:29.797 "flush": true, 00:09:29.797 "reset": true, 00:09:29.797 "nvme_admin": false, 00:09:29.797 "nvme_io": false, 00:09:29.797 "nvme_io_md": false, 00:09:29.797 "write_zeroes": true, 00:09:29.797 "zcopy": true, 00:09:29.797 "get_zone_info": false, 00:09:29.797 "zone_management": false, 00:09:29.797 "zone_append": false, 00:09:29.797 "compare": false, 00:09:29.797 "compare_and_write": false, 00:09:29.797 "abort": true, 00:09:29.797 "seek_hole": false, 00:09:29.797 "seek_data": false, 00:09:29.797 "copy": true, 00:09:29.797 "nvme_iov_md": false 00:09:29.797 }, 00:09:29.797 "memory_domains": [ 00:09:29.797 { 00:09:29.797 "dma_device_id": "system", 00:09:29.797 "dma_device_type": 1 00:09:29.797 }, 00:09:29.797 { 00:09:29.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.797 "dma_device_type": 2 00:09:29.797 } 00:09:29.797 ], 00:09:29.797 "driver_specific": {} 00:09:29.797 } 00:09:29.797 ] 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.797 "name": "Existed_Raid", 00:09:29.797 "uuid": "7d70d2cc-3e79-4f64-a481-e25a99bca7f7", 00:09:29.797 "strip_size_kb": 64, 00:09:29.797 "state": "configuring", 00:09:29.797 "raid_level": "raid0", 00:09:29.797 "superblock": true, 00:09:29.797 "num_base_bdevs": 3, 00:09:29.797 "num_base_bdevs_discovered": 2, 00:09:29.797 "num_base_bdevs_operational": 3, 00:09:29.797 "base_bdevs_list": [ 00:09:29.797 { 00:09:29.797 "name": "BaseBdev1", 00:09:29.797 "uuid": "f8bd6df3-bb6e-4703-b45b-8ea3d79d5f88", 00:09:29.797 "is_configured": true, 00:09:29.797 "data_offset": 2048, 00:09:29.797 "data_size": 63488 00:09:29.797 }, 00:09:29.797 { 00:09:29.797 "name": "BaseBdev2", 00:09:29.797 "uuid": "6eec2f77-3f0e-4b3e-86da-139bcf3f96f1", 00:09:29.797 "is_configured": true, 00:09:29.797 "data_offset": 2048, 00:09:29.797 "data_size": 63488 00:09:29.797 }, 00:09:29.797 { 00:09:29.797 "name": "BaseBdev3", 00:09:29.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.797 "is_configured": false, 00:09:29.797 "data_offset": 0, 00:09:29.797 "data_size": 0 00:09:29.797 } 00:09:29.797 ] 00:09:29.797 }' 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.797 14:26:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.363 [2024-11-20 14:26:31.186394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.363 [2024-11-20 14:26:31.186976] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:30.363 [2024-11-20 14:26:31.187016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.363 BaseBdev3 00:09:30.363 [2024-11-20 14:26:31.187361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:30.363 [2024-11-20 14:26:31.187571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:30.363 [2024-11-20 14:26:31.187590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:30.363 [2024-11-20 14:26:31.187796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.363 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.363 [ 00:09:30.363 { 00:09:30.363 "name": "BaseBdev3", 00:09:30.363 "aliases": [ 00:09:30.363 "3a4b4d42-8712-48cb-92f2-c7ef877b86ca" 00:09:30.363 ], 00:09:30.363 "product_name": "Malloc disk", 00:09:30.363 "block_size": 512, 00:09:30.363 "num_blocks": 65536, 00:09:30.363 "uuid": "3a4b4d42-8712-48cb-92f2-c7ef877b86ca", 00:09:30.363 "assigned_rate_limits": { 00:09:30.363 "rw_ios_per_sec": 0, 00:09:30.363 "rw_mbytes_per_sec": 0, 00:09:30.363 "r_mbytes_per_sec": 0, 00:09:30.363 "w_mbytes_per_sec": 0 00:09:30.363 }, 00:09:30.363 "claimed": true, 00:09:30.364 "claim_type": "exclusive_write", 00:09:30.364 "zoned": false, 00:09:30.364 "supported_io_types": { 00:09:30.364 "read": true, 00:09:30.364 "write": true, 00:09:30.364 "unmap": true, 00:09:30.364 "flush": true, 00:09:30.364 "reset": true, 00:09:30.364 "nvme_admin": false, 00:09:30.364 "nvme_io": false, 00:09:30.364 "nvme_io_md": false, 00:09:30.364 "write_zeroes": true, 00:09:30.364 "zcopy": true, 00:09:30.364 "get_zone_info": false, 00:09:30.364 "zone_management": false, 00:09:30.364 "zone_append": false, 00:09:30.364 "compare": false, 00:09:30.364 "compare_and_write": false, 00:09:30.364 "abort": true, 00:09:30.364 "seek_hole": false, 00:09:30.364 "seek_data": false, 00:09:30.364 "copy": true, 00:09:30.364 "nvme_iov_md": false 00:09:30.364 }, 00:09:30.364 "memory_domains": [ 00:09:30.364 { 00:09:30.364 "dma_device_id": "system", 00:09:30.364 "dma_device_type": 1 00:09:30.364 }, 00:09:30.364 { 00:09:30.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.364 "dma_device_type": 2 00:09:30.364 } 00:09:30.364 ], 00:09:30.364 "driver_specific": 
{} 00:09:30.364 } 00:09:30.364 ] 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.364 "name": "Existed_Raid", 00:09:30.364 "uuid": "7d70d2cc-3e79-4f64-a481-e25a99bca7f7", 00:09:30.364 "strip_size_kb": 64, 00:09:30.364 "state": "online", 00:09:30.364 "raid_level": "raid0", 00:09:30.364 "superblock": true, 00:09:30.364 "num_base_bdevs": 3, 00:09:30.364 "num_base_bdevs_discovered": 3, 00:09:30.364 "num_base_bdevs_operational": 3, 00:09:30.364 "base_bdevs_list": [ 00:09:30.364 { 00:09:30.364 "name": "BaseBdev1", 00:09:30.364 "uuid": "f8bd6df3-bb6e-4703-b45b-8ea3d79d5f88", 00:09:30.364 "is_configured": true, 00:09:30.364 "data_offset": 2048, 00:09:30.364 "data_size": 63488 00:09:30.364 }, 00:09:30.364 { 00:09:30.364 "name": "BaseBdev2", 00:09:30.364 "uuid": "6eec2f77-3f0e-4b3e-86da-139bcf3f96f1", 00:09:30.364 "is_configured": true, 00:09:30.364 "data_offset": 2048, 00:09:30.364 "data_size": 63488 00:09:30.364 }, 00:09:30.364 { 00:09:30.364 "name": "BaseBdev3", 00:09:30.364 "uuid": "3a4b4d42-8712-48cb-92f2-c7ef877b86ca", 00:09:30.364 "is_configured": true, 00:09:30.364 "data_offset": 2048, 00:09:30.364 "data_size": 63488 00:09:30.364 } 00:09:30.364 ] 00:09:30.364 }' 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.364 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.930 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.930 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.930 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:30.930 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.930 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.930 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.931 [2024-11-20 14:26:31.727001] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.931 "name": "Existed_Raid", 00:09:30.931 "aliases": [ 00:09:30.931 "7d70d2cc-3e79-4f64-a481-e25a99bca7f7" 00:09:30.931 ], 00:09:30.931 "product_name": "Raid Volume", 00:09:30.931 "block_size": 512, 00:09:30.931 "num_blocks": 190464, 00:09:30.931 "uuid": "7d70d2cc-3e79-4f64-a481-e25a99bca7f7", 00:09:30.931 "assigned_rate_limits": { 00:09:30.931 "rw_ios_per_sec": 0, 00:09:30.931 "rw_mbytes_per_sec": 0, 00:09:30.931 "r_mbytes_per_sec": 0, 00:09:30.931 "w_mbytes_per_sec": 0 00:09:30.931 }, 00:09:30.931 "claimed": false, 00:09:30.931 "zoned": false, 00:09:30.931 "supported_io_types": { 00:09:30.931 "read": true, 00:09:30.931 "write": true, 00:09:30.931 "unmap": true, 00:09:30.931 "flush": true, 00:09:30.931 "reset": true, 00:09:30.931 "nvme_admin": false, 00:09:30.931 "nvme_io": false, 00:09:30.931 "nvme_io_md": false, 00:09:30.931 
"write_zeroes": true, 00:09:30.931 "zcopy": false, 00:09:30.931 "get_zone_info": false, 00:09:30.931 "zone_management": false, 00:09:30.931 "zone_append": false, 00:09:30.931 "compare": false, 00:09:30.931 "compare_and_write": false, 00:09:30.931 "abort": false, 00:09:30.931 "seek_hole": false, 00:09:30.931 "seek_data": false, 00:09:30.931 "copy": false, 00:09:30.931 "nvme_iov_md": false 00:09:30.931 }, 00:09:30.931 "memory_domains": [ 00:09:30.931 { 00:09:30.931 "dma_device_id": "system", 00:09:30.931 "dma_device_type": 1 00:09:30.931 }, 00:09:30.931 { 00:09:30.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.931 "dma_device_type": 2 00:09:30.931 }, 00:09:30.931 { 00:09:30.931 "dma_device_id": "system", 00:09:30.931 "dma_device_type": 1 00:09:30.931 }, 00:09:30.931 { 00:09:30.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.931 "dma_device_type": 2 00:09:30.931 }, 00:09:30.931 { 00:09:30.931 "dma_device_id": "system", 00:09:30.931 "dma_device_type": 1 00:09:30.931 }, 00:09:30.931 { 00:09:30.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.931 "dma_device_type": 2 00:09:30.931 } 00:09:30.931 ], 00:09:30.931 "driver_specific": { 00:09:30.931 "raid": { 00:09:30.931 "uuid": "7d70d2cc-3e79-4f64-a481-e25a99bca7f7", 00:09:30.931 "strip_size_kb": 64, 00:09:30.931 "state": "online", 00:09:30.931 "raid_level": "raid0", 00:09:30.931 "superblock": true, 00:09:30.931 "num_base_bdevs": 3, 00:09:30.931 "num_base_bdevs_discovered": 3, 00:09:30.931 "num_base_bdevs_operational": 3, 00:09:30.931 "base_bdevs_list": [ 00:09:30.931 { 00:09:30.931 "name": "BaseBdev1", 00:09:30.931 "uuid": "f8bd6df3-bb6e-4703-b45b-8ea3d79d5f88", 00:09:30.931 "is_configured": true, 00:09:30.931 "data_offset": 2048, 00:09:30.931 "data_size": 63488 00:09:30.931 }, 00:09:30.931 { 00:09:30.931 "name": "BaseBdev2", 00:09:30.931 "uuid": "6eec2f77-3f0e-4b3e-86da-139bcf3f96f1", 00:09:30.931 "is_configured": true, 00:09:30.931 "data_offset": 2048, 00:09:30.931 "data_size": 63488 00:09:30.931 }, 
00:09:30.931 { 00:09:30.931 "name": "BaseBdev3", 00:09:30.931 "uuid": "3a4b4d42-8712-48cb-92f2-c7ef877b86ca", 00:09:30.931 "is_configured": true, 00:09:30.931 "data_offset": 2048, 00:09:30.931 "data_size": 63488 00:09:30.931 } 00:09:30.931 ] 00:09:30.931 } 00:09:30.931 } 00:09:30.931 }' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:30.931 BaseBdev2 00:09:30.931 BaseBdev3' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.931 
14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.931 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.190 14:26:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.190 [2024-11-20 14:26:32.034695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.190 [2024-11-20 14:26:32.034731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.190 [2024-11-20 14:26:32.034806] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.190 "name": "Existed_Raid", 00:09:31.190 "uuid": "7d70d2cc-3e79-4f64-a481-e25a99bca7f7", 00:09:31.190 "strip_size_kb": 64, 00:09:31.190 "state": "offline", 00:09:31.190 "raid_level": "raid0", 00:09:31.190 "superblock": true, 00:09:31.190 "num_base_bdevs": 3, 00:09:31.190 "num_base_bdevs_discovered": 2, 00:09:31.190 "num_base_bdevs_operational": 2, 00:09:31.190 "base_bdevs_list": [ 00:09:31.190 { 00:09:31.190 "name": null, 00:09:31.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.190 "is_configured": false, 00:09:31.190 "data_offset": 0, 00:09:31.190 "data_size": 63488 00:09:31.190 }, 00:09:31.190 { 00:09:31.190 "name": "BaseBdev2", 00:09:31.190 "uuid": "6eec2f77-3f0e-4b3e-86da-139bcf3f96f1", 00:09:31.190 "is_configured": true, 00:09:31.190 "data_offset": 2048, 00:09:31.190 "data_size": 63488 00:09:31.190 }, 00:09:31.190 { 00:09:31.190 "name": "BaseBdev3", 00:09:31.190 "uuid": "3a4b4d42-8712-48cb-92f2-c7ef877b86ca", 
00:09:31.190 "is_configured": true, 00:09:31.190 "data_offset": 2048, 00:09:31.190 "data_size": 63488 00:09:31.190 } 00:09:31.190 ] 00:09:31.190 }' 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.190 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.755 [2024-11-20 14:26:32.696133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.755 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.013 [2024-11-20 14:26:32.839353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.013 [2024-11-20 14:26:32.839421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.013 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.014 14:26:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.014 BaseBdev2 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.014 14:26:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.014 [ 00:09:32.014 { 00:09:32.014 "name": "BaseBdev2", 00:09:32.014 "aliases": [ 00:09:32.014 "9cbbd5a9-f0d2-4010-a34c-48856926acaf" 00:09:32.014 ], 00:09:32.014 "product_name": "Malloc disk", 00:09:32.014 "block_size": 512, 00:09:32.014 "num_blocks": 65536, 00:09:32.014 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:32.014 "assigned_rate_limits": { 00:09:32.014 "rw_ios_per_sec": 0, 00:09:32.014 "rw_mbytes_per_sec": 0, 00:09:32.014 "r_mbytes_per_sec": 0, 00:09:32.014 "w_mbytes_per_sec": 0 00:09:32.014 }, 00:09:32.014 "claimed": false, 00:09:32.014 "zoned": false, 00:09:32.014 "supported_io_types": { 00:09:32.014 "read": true, 00:09:32.014 "write": true, 00:09:32.014 "unmap": true, 00:09:32.014 "flush": true, 00:09:32.014 "reset": true, 00:09:32.014 "nvme_admin": false, 00:09:32.014 "nvme_io": false, 00:09:32.014 "nvme_io_md": false, 00:09:32.014 "write_zeroes": true, 00:09:32.014 "zcopy": true, 00:09:32.014 "get_zone_info": false, 00:09:32.014 
"zone_management": false, 00:09:32.014 "zone_append": false, 00:09:32.014 "compare": false, 00:09:32.014 "compare_and_write": false, 00:09:32.014 "abort": true, 00:09:32.014 "seek_hole": false, 00:09:32.014 "seek_data": false, 00:09:32.014 "copy": true, 00:09:32.014 "nvme_iov_md": false 00:09:32.014 }, 00:09:32.014 "memory_domains": [ 00:09:32.014 { 00:09:32.014 "dma_device_id": "system", 00:09:32.014 "dma_device_type": 1 00:09:32.014 }, 00:09:32.014 { 00:09:32.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.014 "dma_device_type": 2 00:09:32.014 } 00:09:32.014 ], 00:09:32.014 "driver_specific": {} 00:09:32.014 } 00:09:32.014 ] 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.014 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.271 BaseBdev3 00:09:32.271 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.271 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:32.271 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:32.271 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.271 14:26:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:32.271 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.271 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 [ 00:09:32.272 { 00:09:32.272 "name": "BaseBdev3", 00:09:32.272 "aliases": [ 00:09:32.272 "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06" 00:09:32.272 ], 00:09:32.272 "product_name": "Malloc disk", 00:09:32.272 "block_size": 512, 00:09:32.272 "num_blocks": 65536, 00:09:32.272 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:32.272 "assigned_rate_limits": { 00:09:32.272 "rw_ios_per_sec": 0, 00:09:32.272 "rw_mbytes_per_sec": 0, 00:09:32.272 "r_mbytes_per_sec": 0, 00:09:32.272 "w_mbytes_per_sec": 0 00:09:32.272 }, 00:09:32.272 "claimed": false, 00:09:32.272 "zoned": false, 00:09:32.272 "supported_io_types": { 00:09:32.272 "read": true, 00:09:32.272 "write": true, 00:09:32.272 "unmap": true, 00:09:32.272 "flush": true, 00:09:32.272 "reset": true, 00:09:32.272 "nvme_admin": false, 00:09:32.272 "nvme_io": false, 00:09:32.272 "nvme_io_md": false, 00:09:32.272 "write_zeroes": true, 00:09:32.272 
"zcopy": true, 00:09:32.272 "get_zone_info": false, 00:09:32.272 "zone_management": false, 00:09:32.272 "zone_append": false, 00:09:32.272 "compare": false, 00:09:32.272 "compare_and_write": false, 00:09:32.272 "abort": true, 00:09:32.272 "seek_hole": false, 00:09:32.272 "seek_data": false, 00:09:32.272 "copy": true, 00:09:32.272 "nvme_iov_md": false 00:09:32.272 }, 00:09:32.272 "memory_domains": [ 00:09:32.272 { 00:09:32.272 "dma_device_id": "system", 00:09:32.272 "dma_device_type": 1 00:09:32.272 }, 00:09:32.272 { 00:09:32.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.272 "dma_device_type": 2 00:09:32.272 } 00:09:32.272 ], 00:09:32.272 "driver_specific": {} 00:09:32.272 } 00:09:32.272 ] 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 [2024-11-20 14:26:33.126367] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.272 [2024-11-20 14:26:33.126427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.272 [2024-11-20 14:26:33.126461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.272 [2024-11-20 14:26:33.128925] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.272 14:26:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.272 "name": "Existed_Raid", 00:09:32.272 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:32.272 "strip_size_kb": 64, 00:09:32.272 "state": "configuring", 00:09:32.272 "raid_level": "raid0", 00:09:32.272 "superblock": true, 00:09:32.272 "num_base_bdevs": 3, 00:09:32.272 "num_base_bdevs_discovered": 2, 00:09:32.272 "num_base_bdevs_operational": 3, 00:09:32.272 "base_bdevs_list": [ 00:09:32.272 { 00:09:32.272 "name": "BaseBdev1", 00:09:32.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.272 "is_configured": false, 00:09:32.272 "data_offset": 0, 00:09:32.272 "data_size": 0 00:09:32.272 }, 00:09:32.272 { 00:09:32.272 "name": "BaseBdev2", 00:09:32.272 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:32.272 "is_configured": true, 00:09:32.272 "data_offset": 2048, 00:09:32.272 "data_size": 63488 00:09:32.272 }, 00:09:32.272 { 00:09:32.272 "name": "BaseBdev3", 00:09:32.272 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:32.272 "is_configured": true, 00:09:32.272 "data_offset": 2048, 00:09:32.272 "data_size": 63488 00:09:32.272 } 00:09:32.272 ] 00:09:32.272 }' 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.272 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 [2024-11-20 14:26:33.638539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.851 14:26:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.851 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.851 "name": "Existed_Raid", 00:09:32.851 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:32.851 "strip_size_kb": 64, 
00:09:32.852 "state": "configuring", 00:09:32.852 "raid_level": "raid0", 00:09:32.852 "superblock": true, 00:09:32.852 "num_base_bdevs": 3, 00:09:32.852 "num_base_bdevs_discovered": 1, 00:09:32.852 "num_base_bdevs_operational": 3, 00:09:32.852 "base_bdevs_list": [ 00:09:32.852 { 00:09:32.852 "name": "BaseBdev1", 00:09:32.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.852 "is_configured": false, 00:09:32.852 "data_offset": 0, 00:09:32.852 "data_size": 0 00:09:32.852 }, 00:09:32.852 { 00:09:32.852 "name": null, 00:09:32.852 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:32.852 "is_configured": false, 00:09:32.852 "data_offset": 0, 00:09:32.852 "data_size": 63488 00:09:32.852 }, 00:09:32.852 { 00:09:32.852 "name": "BaseBdev3", 00:09:32.852 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:32.852 "is_configured": true, 00:09:32.852 "data_offset": 2048, 00:09:32.852 "data_size": 63488 00:09:32.852 } 00:09:32.852 ] 00:09:32.852 }' 00:09:32.852 14:26:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.852 14:26:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.141 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.141 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.141 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.141 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.400 [2024-11-20 14:26:34.280833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.400 BaseBdev1 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.400 
[ 00:09:33.400 { 00:09:33.400 "name": "BaseBdev1", 00:09:33.400 "aliases": [ 00:09:33.400 "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b" 00:09:33.400 ], 00:09:33.400 "product_name": "Malloc disk", 00:09:33.400 "block_size": 512, 00:09:33.400 "num_blocks": 65536, 00:09:33.400 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:33.400 "assigned_rate_limits": { 00:09:33.400 "rw_ios_per_sec": 0, 00:09:33.400 "rw_mbytes_per_sec": 0, 00:09:33.400 "r_mbytes_per_sec": 0, 00:09:33.400 "w_mbytes_per_sec": 0 00:09:33.400 }, 00:09:33.400 "claimed": true, 00:09:33.400 "claim_type": "exclusive_write", 00:09:33.400 "zoned": false, 00:09:33.400 "supported_io_types": { 00:09:33.400 "read": true, 00:09:33.400 "write": true, 00:09:33.400 "unmap": true, 00:09:33.400 "flush": true, 00:09:33.400 "reset": true, 00:09:33.400 "nvme_admin": false, 00:09:33.400 "nvme_io": false, 00:09:33.400 "nvme_io_md": false, 00:09:33.400 "write_zeroes": true, 00:09:33.400 "zcopy": true, 00:09:33.400 "get_zone_info": false, 00:09:33.400 "zone_management": false, 00:09:33.400 "zone_append": false, 00:09:33.400 "compare": false, 00:09:33.400 "compare_and_write": false, 00:09:33.400 "abort": true, 00:09:33.400 "seek_hole": false, 00:09:33.400 "seek_data": false, 00:09:33.400 "copy": true, 00:09:33.400 "nvme_iov_md": false 00:09:33.400 }, 00:09:33.400 "memory_domains": [ 00:09:33.400 { 00:09:33.400 "dma_device_id": "system", 00:09:33.400 "dma_device_type": 1 00:09:33.400 }, 00:09:33.400 { 00:09:33.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.400 "dma_device_type": 2 00:09:33.400 } 00:09:33.400 ], 00:09:33.400 "driver_specific": {} 00:09:33.400 } 00:09:33.400 ] 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.400 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.401 "name": "Existed_Raid", 00:09:33.401 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:33.401 "strip_size_kb": 64, 00:09:33.401 "state": "configuring", 00:09:33.401 "raid_level": "raid0", 00:09:33.401 "superblock": true, 
00:09:33.401 "num_base_bdevs": 3, 00:09:33.401 "num_base_bdevs_discovered": 2, 00:09:33.401 "num_base_bdevs_operational": 3, 00:09:33.401 "base_bdevs_list": [ 00:09:33.401 { 00:09:33.401 "name": "BaseBdev1", 00:09:33.401 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:33.401 "is_configured": true, 00:09:33.401 "data_offset": 2048, 00:09:33.401 "data_size": 63488 00:09:33.401 }, 00:09:33.401 { 00:09:33.401 "name": null, 00:09:33.401 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:33.401 "is_configured": false, 00:09:33.401 "data_offset": 0, 00:09:33.401 "data_size": 63488 00:09:33.401 }, 00:09:33.401 { 00:09:33.401 "name": "BaseBdev3", 00:09:33.401 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:33.401 "is_configured": true, 00:09:33.401 "data_offset": 2048, 00:09:33.401 "data_size": 63488 00:09:33.401 } 00:09:33.401 ] 00:09:33.401 }' 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.401 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.968 [2024-11-20 14:26:34.913070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.968 "name": "Existed_Raid", 00:09:33.968 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:33.968 "strip_size_kb": 64, 00:09:33.968 "state": "configuring", 00:09:33.968 "raid_level": "raid0", 00:09:33.968 "superblock": true, 00:09:33.968 "num_base_bdevs": 3, 00:09:33.968 "num_base_bdevs_discovered": 1, 00:09:33.968 "num_base_bdevs_operational": 3, 00:09:33.968 "base_bdevs_list": [ 00:09:33.968 { 00:09:33.968 "name": "BaseBdev1", 00:09:33.968 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:33.968 "is_configured": true, 00:09:33.968 "data_offset": 2048, 00:09:33.968 "data_size": 63488 00:09:33.968 }, 00:09:33.968 { 00:09:33.968 "name": null, 00:09:33.968 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:33.968 "is_configured": false, 00:09:33.968 "data_offset": 0, 00:09:33.968 "data_size": 63488 00:09:33.968 }, 00:09:33.968 { 00:09:33.968 "name": null, 00:09:33.968 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:33.968 "is_configured": false, 00:09:33.968 "data_offset": 0, 00:09:33.968 "data_size": 63488 00:09:33.968 } 00:09:33.968 ] 00:09:33.968 }' 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.968 14:26:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:34.533 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.534 [2024-11-20 14:26:35.505276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.534 "name": "Existed_Raid", 00:09:34.534 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:34.534 "strip_size_kb": 64, 00:09:34.534 "state": "configuring", 00:09:34.534 "raid_level": "raid0", 00:09:34.534 "superblock": true, 00:09:34.534 "num_base_bdevs": 3, 00:09:34.534 "num_base_bdevs_discovered": 2, 00:09:34.534 "num_base_bdevs_operational": 3, 00:09:34.534 "base_bdevs_list": [ 00:09:34.534 { 00:09:34.534 "name": "BaseBdev1", 00:09:34.534 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:34.534 "is_configured": true, 00:09:34.534 "data_offset": 2048, 00:09:34.534 "data_size": 63488 00:09:34.534 }, 00:09:34.534 { 00:09:34.534 "name": null, 00:09:34.534 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:34.534 "is_configured": false, 00:09:34.534 "data_offset": 0, 00:09:34.534 "data_size": 63488 00:09:34.534 }, 00:09:34.534 { 00:09:34.534 "name": "BaseBdev3", 00:09:34.534 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:34.534 "is_configured": true, 00:09:34.534 "data_offset": 2048, 00:09:34.534 "data_size": 63488 00:09:34.534 } 00:09:34.534 ] 00:09:34.534 }' 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.534 14:26:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.101 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.101 [2024-11-20 14:26:36.093408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.359 "name": "Existed_Raid", 00:09:35.359 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:35.359 "strip_size_kb": 64, 00:09:35.359 "state": "configuring", 00:09:35.359 "raid_level": "raid0", 00:09:35.359 "superblock": true, 00:09:35.359 "num_base_bdevs": 3, 00:09:35.359 "num_base_bdevs_discovered": 1, 00:09:35.359 "num_base_bdevs_operational": 3, 00:09:35.359 "base_bdevs_list": [ 00:09:35.359 { 00:09:35.359 "name": null, 00:09:35.359 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:35.359 "is_configured": false, 00:09:35.359 "data_offset": 0, 00:09:35.359 "data_size": 63488 00:09:35.359 }, 00:09:35.359 { 00:09:35.359 "name": null, 00:09:35.359 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:35.359 "is_configured": false, 00:09:35.359 "data_offset": 0, 00:09:35.359 
"data_size": 63488 00:09:35.359 }, 00:09:35.359 { 00:09:35.359 "name": "BaseBdev3", 00:09:35.359 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:35.359 "is_configured": true, 00:09:35.359 "data_offset": 2048, 00:09:35.359 "data_size": 63488 00:09:35.359 } 00:09:35.359 ] 00:09:35.359 }' 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.359 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.617 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.617 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.617 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.617 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.876 [2024-11-20 14:26:36.721038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:35.876 14:26:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.876 "name": "Existed_Raid", 00:09:35.876 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:35.876 "strip_size_kb": 64, 00:09:35.876 "state": "configuring", 00:09:35.876 "raid_level": "raid0", 00:09:35.876 "superblock": true, 00:09:35.876 "num_base_bdevs": 3, 00:09:35.876 
"num_base_bdevs_discovered": 2, 00:09:35.876 "num_base_bdevs_operational": 3, 00:09:35.876 "base_bdevs_list": [ 00:09:35.876 { 00:09:35.876 "name": null, 00:09:35.876 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:35.876 "is_configured": false, 00:09:35.876 "data_offset": 0, 00:09:35.876 "data_size": 63488 00:09:35.876 }, 00:09:35.876 { 00:09:35.876 "name": "BaseBdev2", 00:09:35.876 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:35.876 "is_configured": true, 00:09:35.876 "data_offset": 2048, 00:09:35.876 "data_size": 63488 00:09:35.876 }, 00:09:35.876 { 00:09:35.876 "name": "BaseBdev3", 00:09:35.876 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:35.876 "is_configured": true, 00:09:35.876 "data_offset": 2048, 00:09:35.876 "data_size": 63488 00:09:35.876 } 00:09:35.876 ] 00:09:35.876 }' 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.876 14:26:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.443 14:26:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.443 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.443 [2024-11-20 14:26:37.363340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.443 [2024-11-20 14:26:37.363607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:36.443 [2024-11-20 14:26:37.363645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:36.443 [2024-11-20 14:26:37.363952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:36.443 NewBaseBdev 00:09:36.444 [2024-11-20 14:26:37.364141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:36.444 [2024-11-20 14:26:37.364157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:36.444 [2024-11-20 14:26:37.364325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:36.444 
14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 [ 00:09:36.444 { 00:09:36.444 "name": "NewBaseBdev", 00:09:36.444 "aliases": [ 00:09:36.444 "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b" 00:09:36.444 ], 00:09:36.444 "product_name": "Malloc disk", 00:09:36.444 "block_size": 512, 00:09:36.444 "num_blocks": 65536, 00:09:36.444 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:36.444 "assigned_rate_limits": { 00:09:36.444 "rw_ios_per_sec": 0, 00:09:36.444 "rw_mbytes_per_sec": 0, 00:09:36.444 "r_mbytes_per_sec": 0, 00:09:36.444 "w_mbytes_per_sec": 0 00:09:36.444 }, 00:09:36.444 "claimed": true, 00:09:36.444 "claim_type": "exclusive_write", 00:09:36.444 "zoned": false, 00:09:36.444 "supported_io_types": { 00:09:36.444 "read": true, 00:09:36.444 "write": true, 00:09:36.444 
"unmap": true, 00:09:36.444 "flush": true, 00:09:36.444 "reset": true, 00:09:36.444 "nvme_admin": false, 00:09:36.444 "nvme_io": false, 00:09:36.444 "nvme_io_md": false, 00:09:36.444 "write_zeroes": true, 00:09:36.444 "zcopy": true, 00:09:36.444 "get_zone_info": false, 00:09:36.444 "zone_management": false, 00:09:36.444 "zone_append": false, 00:09:36.444 "compare": false, 00:09:36.444 "compare_and_write": false, 00:09:36.444 "abort": true, 00:09:36.444 "seek_hole": false, 00:09:36.444 "seek_data": false, 00:09:36.444 "copy": true, 00:09:36.444 "nvme_iov_md": false 00:09:36.444 }, 00:09:36.444 "memory_domains": [ 00:09:36.444 { 00:09:36.444 "dma_device_id": "system", 00:09:36.444 "dma_device_type": 1 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.444 "dma_device_type": 2 00:09:36.444 } 00:09:36.444 ], 00:09:36.444 "driver_specific": {} 00:09:36.444 } 00:09:36.444 ] 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.444 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.444 "name": "Existed_Raid", 00:09:36.444 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:36.444 "strip_size_kb": 64, 00:09:36.444 "state": "online", 00:09:36.444 "raid_level": "raid0", 00:09:36.444 "superblock": true, 00:09:36.444 "num_base_bdevs": 3, 00:09:36.444 "num_base_bdevs_discovered": 3, 00:09:36.444 "num_base_bdevs_operational": 3, 00:09:36.444 "base_bdevs_list": [ 00:09:36.444 { 00:09:36.444 "name": "NewBaseBdev", 00:09:36.444 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:36.444 "is_configured": true, 00:09:36.444 "data_offset": 2048, 00:09:36.444 "data_size": 63488 00:09:36.444 }, 00:09:36.444 { 00:09:36.444 "name": "BaseBdev2", 00:09:36.444 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:36.445 "is_configured": true, 00:09:36.445 "data_offset": 2048, 00:09:36.445 "data_size": 63488 00:09:36.445 }, 00:09:36.445 { 00:09:36.445 "name": "BaseBdev3", 00:09:36.445 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:36.445 
"is_configured": true, 00:09:36.445 "data_offset": 2048, 00:09:36.445 "data_size": 63488 00:09:36.445 } 00:09:36.445 ] 00:09:36.445 }' 00:09:36.445 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.445 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.012 [2024-11-20 14:26:37.903932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.012 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.012 "name": "Existed_Raid", 00:09:37.012 "aliases": [ 00:09:37.012 "e7053b16-e914-44ca-8997-2e654bb31888" 00:09:37.012 ], 00:09:37.012 "product_name": "Raid 
Volume", 00:09:37.012 "block_size": 512, 00:09:37.012 "num_blocks": 190464, 00:09:37.012 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:37.012 "assigned_rate_limits": { 00:09:37.012 "rw_ios_per_sec": 0, 00:09:37.012 "rw_mbytes_per_sec": 0, 00:09:37.012 "r_mbytes_per_sec": 0, 00:09:37.012 "w_mbytes_per_sec": 0 00:09:37.012 }, 00:09:37.012 "claimed": false, 00:09:37.012 "zoned": false, 00:09:37.012 "supported_io_types": { 00:09:37.012 "read": true, 00:09:37.012 "write": true, 00:09:37.012 "unmap": true, 00:09:37.012 "flush": true, 00:09:37.012 "reset": true, 00:09:37.012 "nvme_admin": false, 00:09:37.012 "nvme_io": false, 00:09:37.012 "nvme_io_md": false, 00:09:37.012 "write_zeroes": true, 00:09:37.012 "zcopy": false, 00:09:37.013 "get_zone_info": false, 00:09:37.013 "zone_management": false, 00:09:37.013 "zone_append": false, 00:09:37.013 "compare": false, 00:09:37.013 "compare_and_write": false, 00:09:37.013 "abort": false, 00:09:37.013 "seek_hole": false, 00:09:37.013 "seek_data": false, 00:09:37.013 "copy": false, 00:09:37.013 "nvme_iov_md": false 00:09:37.013 }, 00:09:37.013 "memory_domains": [ 00:09:37.013 { 00:09:37.013 "dma_device_id": "system", 00:09:37.013 "dma_device_type": 1 00:09:37.013 }, 00:09:37.013 { 00:09:37.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.013 "dma_device_type": 2 00:09:37.013 }, 00:09:37.013 { 00:09:37.013 "dma_device_id": "system", 00:09:37.013 "dma_device_type": 1 00:09:37.013 }, 00:09:37.013 { 00:09:37.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.013 "dma_device_type": 2 00:09:37.013 }, 00:09:37.013 { 00:09:37.013 "dma_device_id": "system", 00:09:37.013 "dma_device_type": 1 00:09:37.013 }, 00:09:37.013 { 00:09:37.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.013 "dma_device_type": 2 00:09:37.013 } 00:09:37.013 ], 00:09:37.013 "driver_specific": { 00:09:37.013 "raid": { 00:09:37.013 "uuid": "e7053b16-e914-44ca-8997-2e654bb31888", 00:09:37.013 "strip_size_kb": 64, 00:09:37.013 "state": "online", 
00:09:37.013 "raid_level": "raid0", 00:09:37.013 "superblock": true, 00:09:37.013 "num_base_bdevs": 3, 00:09:37.013 "num_base_bdevs_discovered": 3, 00:09:37.013 "num_base_bdevs_operational": 3, 00:09:37.013 "base_bdevs_list": [ 00:09:37.013 { 00:09:37.013 "name": "NewBaseBdev", 00:09:37.013 "uuid": "e6a7f5cc-0374-45ac-ac0e-72b54cc24d9b", 00:09:37.013 "is_configured": true, 00:09:37.013 "data_offset": 2048, 00:09:37.013 "data_size": 63488 00:09:37.013 }, 00:09:37.013 { 00:09:37.013 "name": "BaseBdev2", 00:09:37.013 "uuid": "9cbbd5a9-f0d2-4010-a34c-48856926acaf", 00:09:37.013 "is_configured": true, 00:09:37.013 "data_offset": 2048, 00:09:37.013 "data_size": 63488 00:09:37.013 }, 00:09:37.013 { 00:09:37.013 "name": "BaseBdev3", 00:09:37.013 "uuid": "7a4ff7d3-3a67-4406-b436-c8f1c6ce3e06", 00:09:37.013 "is_configured": true, 00:09:37.013 "data_offset": 2048, 00:09:37.013 "data_size": 63488 00:09:37.013 } 00:09:37.013 ] 00:09:37.013 } 00:09:37.013 } 00:09:37.013 }' 00:09:37.013 14:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.013 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:37.013 BaseBdev2 00:09:37.013 BaseBdev3' 00:09:37.013 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.013 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.013 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.013 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:37.013 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.013 14:26:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.013 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.271 14:26:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.271 [2024-11-20 14:26:38.219686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.271 [2024-11-20 14:26:38.219723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.271 [2024-11-20 14:26:38.219838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.271 [2024-11-20 14:26:38.219914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.271 [2024-11-20 14:26:38.219936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64520 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64520 ']' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64520 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64520 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64520' 00:09:37.271 killing process with pid 64520 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64520 00:09:37.271 [2024-11-20 14:26:38.259910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.271 14:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64520 00:09:37.530 [2024-11-20 14:26:38.524414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.903 ************************************ 00:09:38.903 END TEST raid_state_function_test_sb 00:09:38.903 ************************************ 00:09:38.903 14:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.903 00:09:38.903 real 0m11.696s 00:09:38.903 user 0m19.391s 00:09:38.903 sys 0m1.613s 00:09:38.903 14:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.903 14:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.903 14:26:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:38.903 14:26:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.903 
14:26:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.903 14:26:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.903 ************************************ 00:09:38.903 START TEST raid_superblock_test 00:09:38.903 ************************************ 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:38.903 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65157 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65157 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65157 ']' 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.904 14:26:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.904 [2024-11-20 14:26:39.734251] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:09:38.904 [2024-11-20 14:26:39.734674] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65157 ] 00:09:38.904 [2024-11-20 14:26:39.920551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.162 [2024-11-20 14:26:40.056765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.420 [2024-11-20 14:26:40.260813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.420 [2024-11-20 14:26:40.261011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:39.987 
14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.987 malloc1 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.987 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.987 [2024-11-20 14:26:40.806287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.988 [2024-11-20 14:26:40.806362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.988 [2024-11-20 14:26:40.806397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:39.988 [2024-11-20 14:26:40.806413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.988 [2024-11-20 14:26:40.809226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.988 [2024-11-20 14:26:40.809400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.988 pt1 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.988 malloc2 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.988 [2024-11-20 14:26:40.862155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.988 [2024-11-20 14:26:40.862358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.988 [2024-11-20 14:26:40.862440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:39.988 [2024-11-20 14:26:40.862563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.988 [2024-11-20 14:26:40.865356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.988 [2024-11-20 14:26:40.865507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.988 
pt2 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.988 malloc3 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.988 [2024-11-20 14:26:40.925197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:39.988 [2024-11-20 14:26:40.925265] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.988 [2024-11-20 14:26:40.925299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.988 [2024-11-20 14:26:40.925316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.988 [2024-11-20 14:26:40.928072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.988 [2024-11-20 14:26:40.928118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:39.988 pt3 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.988 [2024-11-20 14:26:40.937260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.988 [2024-11-20 14:26:40.939725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.988 [2024-11-20 14:26:40.939822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:39.988 [2024-11-20 14:26:40.940037] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:39.988 [2024-11-20 14:26:40.940061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.988 [2024-11-20 14:26:40.940364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:39.988 [2024-11-20 14:26:40.940568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:39.988 [2024-11-20 14:26:40.940583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:39.988 [2024-11-20 14:26:40.940782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.988 14:26:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.988 "name": "raid_bdev1", 00:09:39.988 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:39.988 "strip_size_kb": 64, 00:09:39.988 "state": "online", 00:09:39.988 "raid_level": "raid0", 00:09:39.988 "superblock": true, 00:09:39.988 "num_base_bdevs": 3, 00:09:39.988 "num_base_bdevs_discovered": 3, 00:09:39.988 "num_base_bdevs_operational": 3, 00:09:39.988 "base_bdevs_list": [ 00:09:39.988 { 00:09:39.988 "name": "pt1", 00:09:39.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.988 "is_configured": true, 00:09:39.988 "data_offset": 2048, 00:09:39.988 "data_size": 63488 00:09:39.988 }, 00:09:39.988 { 00:09:39.988 "name": "pt2", 00:09:39.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.988 "is_configured": true, 00:09:39.988 "data_offset": 2048, 00:09:39.988 "data_size": 63488 00:09:39.988 }, 00:09:39.988 { 00:09:39.988 "name": "pt3", 00:09:39.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.988 "is_configured": true, 00:09:39.988 "data_offset": 2048, 00:09:39.988 "data_size": 63488 00:09:39.988 } 00:09:39.988 ] 00:09:39.988 }' 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.988 14:26:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.555 [2024-11-20 14:26:41.461793] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.555 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.555 "name": "raid_bdev1", 00:09:40.555 "aliases": [ 00:09:40.555 "4387498c-da97-4d37-ae1e-639d021b3d31" 00:09:40.555 ], 00:09:40.555 "product_name": "Raid Volume", 00:09:40.555 "block_size": 512, 00:09:40.555 "num_blocks": 190464, 00:09:40.555 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:40.555 "assigned_rate_limits": { 00:09:40.555 "rw_ios_per_sec": 0, 00:09:40.555 "rw_mbytes_per_sec": 0, 00:09:40.555 "r_mbytes_per_sec": 0, 00:09:40.555 "w_mbytes_per_sec": 0 00:09:40.555 }, 00:09:40.556 "claimed": false, 00:09:40.556 "zoned": false, 00:09:40.556 "supported_io_types": { 00:09:40.556 "read": true, 00:09:40.556 "write": true, 00:09:40.556 "unmap": true, 00:09:40.556 "flush": true, 00:09:40.556 "reset": true, 00:09:40.556 "nvme_admin": false, 00:09:40.556 "nvme_io": false, 00:09:40.556 "nvme_io_md": false, 00:09:40.556 "write_zeroes": true, 00:09:40.556 "zcopy": false, 00:09:40.556 "get_zone_info": false, 00:09:40.556 "zone_management": false, 00:09:40.556 "zone_append": false, 00:09:40.556 "compare": 
false, 00:09:40.556 "compare_and_write": false, 00:09:40.556 "abort": false, 00:09:40.556 "seek_hole": false, 00:09:40.556 "seek_data": false, 00:09:40.556 "copy": false, 00:09:40.556 "nvme_iov_md": false 00:09:40.556 }, 00:09:40.556 "memory_domains": [ 00:09:40.556 { 00:09:40.556 "dma_device_id": "system", 00:09:40.556 "dma_device_type": 1 00:09:40.556 }, 00:09:40.556 { 00:09:40.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.556 "dma_device_type": 2 00:09:40.556 }, 00:09:40.556 { 00:09:40.556 "dma_device_id": "system", 00:09:40.556 "dma_device_type": 1 00:09:40.556 }, 00:09:40.556 { 00:09:40.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.556 "dma_device_type": 2 00:09:40.556 }, 00:09:40.556 { 00:09:40.556 "dma_device_id": "system", 00:09:40.556 "dma_device_type": 1 00:09:40.556 }, 00:09:40.556 { 00:09:40.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.556 "dma_device_type": 2 00:09:40.556 } 00:09:40.556 ], 00:09:40.556 "driver_specific": { 00:09:40.556 "raid": { 00:09:40.556 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:40.556 "strip_size_kb": 64, 00:09:40.556 "state": "online", 00:09:40.556 "raid_level": "raid0", 00:09:40.556 "superblock": true, 00:09:40.556 "num_base_bdevs": 3, 00:09:40.556 "num_base_bdevs_discovered": 3, 00:09:40.556 "num_base_bdevs_operational": 3, 00:09:40.556 "base_bdevs_list": [ 00:09:40.556 { 00:09:40.556 "name": "pt1", 00:09:40.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.556 "is_configured": true, 00:09:40.556 "data_offset": 2048, 00:09:40.556 "data_size": 63488 00:09:40.556 }, 00:09:40.556 { 00:09:40.556 "name": "pt2", 00:09:40.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.556 "is_configured": true, 00:09:40.556 "data_offset": 2048, 00:09:40.556 "data_size": 63488 00:09:40.556 }, 00:09:40.556 { 00:09:40.556 "name": "pt3", 00:09:40.556 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.556 "is_configured": true, 00:09:40.556 "data_offset": 2048, 00:09:40.556 "data_size": 
63488 00:09:40.556 } 00:09:40.556 ] 00:09:40.556 } 00:09:40.556 } 00:09:40.556 }' 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.556 pt2 00:09:40.556 pt3' 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.556 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.814 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.815 [2024-11-20 14:26:41.769750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4387498c-da97-4d37-ae1e-639d021b3d31 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4387498c-da97-4d37-ae1e-639d021b3d31 ']' 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.815 [2024-11-20 14:26:41.813411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.815 [2024-11-20 14:26:41.813446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.815 [2024-11-20 14:26:41.813544] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.815 [2024-11-20 14:26:41.813628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.815 [2024-11-20 14:26:41.813644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.815 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.072 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.072 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:41.072 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.072 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.072 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:41.073 14:26:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.073 [2024-11-20 14:26:41.945497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:41.073 [2024-11-20 14:26:41.948028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:41.073 [2024-11-20 14:26:41.948103] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:41.073 [2024-11-20 14:26:41.948182] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:41.073 [2024-11-20 14:26:41.948263] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:41.073 [2024-11-20 14:26:41.948298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:41.073 [2024-11-20 14:26:41.948327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.073 [2024-11-20 14:26:41.948343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:41.073 request: 00:09:41.073 { 00:09:41.073 "name": "raid_bdev1", 00:09:41.073 "raid_level": "raid0", 00:09:41.073 "base_bdevs": [ 00:09:41.073 "malloc1", 00:09:41.073 "malloc2", 00:09:41.073 "malloc3" 00:09:41.073 ], 00:09:41.073 "strip_size_kb": 64, 00:09:41.073 "superblock": false, 00:09:41.073 "method": "bdev_raid_create", 00:09:41.073 "req_id": 1 00:09:41.073 } 00:09:41.073 Got JSON-RPC error response 00:09:41.073 response: 00:09:41.073 { 00:09:41.073 "code": -17, 00:09:41.073 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:41.073 } 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:41.073 14:26:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.073 [2024-11-20 14:26:42.005466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.073 [2024-11-20 14:26:42.005700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.073 [2024-11-20 14:26:42.005868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:41.073 [2024-11-20 14:26:42.006011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.073 [2024-11-20 14:26:42.009065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.073 [2024-11-20 14:26:42.009216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.073 [2024-11-20 14:26:42.009345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:41.073 [2024-11-20 14:26:42.009417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:41.073 pt1 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.073 "name": "raid_bdev1", 00:09:41.073 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:41.073 
"strip_size_kb": 64, 00:09:41.073 "state": "configuring", 00:09:41.073 "raid_level": "raid0", 00:09:41.073 "superblock": true, 00:09:41.073 "num_base_bdevs": 3, 00:09:41.073 "num_base_bdevs_discovered": 1, 00:09:41.073 "num_base_bdevs_operational": 3, 00:09:41.073 "base_bdevs_list": [ 00:09:41.073 { 00:09:41.073 "name": "pt1", 00:09:41.073 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.073 "is_configured": true, 00:09:41.073 "data_offset": 2048, 00:09:41.073 "data_size": 63488 00:09:41.073 }, 00:09:41.073 { 00:09:41.073 "name": null, 00:09:41.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.073 "is_configured": false, 00:09:41.073 "data_offset": 2048, 00:09:41.073 "data_size": 63488 00:09:41.073 }, 00:09:41.073 { 00:09:41.073 "name": null, 00:09:41.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.073 "is_configured": false, 00:09:41.073 "data_offset": 2048, 00:09:41.073 "data_size": 63488 00:09:41.073 } 00:09:41.073 ] 00:09:41.073 }' 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.073 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.638 [2024-11-20 14:26:42.529873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.638 [2024-11-20 14:26:42.529975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.638 [2024-11-20 14:26:42.530018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:41.638 [2024-11-20 14:26:42.530035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.638 [2024-11-20 14:26:42.530613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.638 [2024-11-20 14:26:42.530670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.638 [2024-11-20 14:26:42.530786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.638 [2024-11-20 14:26:42.530827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.638 pt2 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.638 [2024-11-20 14:26:42.537842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.638 14:26:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.638 "name": "raid_bdev1", 00:09:41.638 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:41.638 "strip_size_kb": 64, 00:09:41.638 "state": "configuring", 00:09:41.638 "raid_level": "raid0", 00:09:41.638 "superblock": true, 00:09:41.638 "num_base_bdevs": 3, 00:09:41.638 "num_base_bdevs_discovered": 1, 00:09:41.638 "num_base_bdevs_operational": 3, 00:09:41.638 "base_bdevs_list": [ 00:09:41.638 { 00:09:41.638 "name": "pt1", 00:09:41.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.638 "is_configured": true, 00:09:41.638 "data_offset": 2048, 00:09:41.638 "data_size": 63488 00:09:41.638 }, 00:09:41.638 { 00:09:41.638 "name": null, 00:09:41.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.638 "is_configured": false, 00:09:41.638 "data_offset": 0, 00:09:41.638 "data_size": 63488 00:09:41.638 }, 00:09:41.638 { 00:09:41.638 "name": null, 00:09:41.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.638 
"is_configured": false, 00:09:41.638 "data_offset": 2048, 00:09:41.638 "data_size": 63488 00:09:41.638 } 00:09:41.638 ] 00:09:41.638 }' 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.638 14:26:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.204 [2024-11-20 14:26:43.070012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.204 [2024-11-20 14:26:43.070108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.204 [2024-11-20 14:26:43.070140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:42.204 [2024-11-20 14:26:43.070158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.204 [2024-11-20 14:26:43.070795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.204 [2024-11-20 14:26:43.070827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.204 [2024-11-20 14:26:43.070935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:42.204 [2024-11-20 14:26:43.070984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.204 pt2 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.204 [2024-11-20 14:26:43.077980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:42.204 [2024-11-20 14:26:43.078042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.204 [2024-11-20 14:26:43.078065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:42.204 [2024-11-20 14:26:43.078083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.204 [2024-11-20 14:26:43.078572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.204 [2024-11-20 14:26:43.078612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:42.204 [2024-11-20 14:26:43.078706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:42.204 [2024-11-20 14:26:43.078742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:42.204 [2024-11-20 14:26:43.078902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:42.204 [2024-11-20 14:26:43.078924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:42.204 [2024-11-20 14:26:43.079235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:42.204 [2024-11-20 14:26:43.079432] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:42.204 [2024-11-20 14:26:43.079448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:42.204 [2024-11-20 14:26:43.079617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.204 pt3 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.204 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.204 "name": "raid_bdev1", 00:09:42.204 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:42.205 "strip_size_kb": 64, 00:09:42.205 "state": "online", 00:09:42.205 "raid_level": "raid0", 00:09:42.205 "superblock": true, 00:09:42.205 "num_base_bdevs": 3, 00:09:42.205 "num_base_bdevs_discovered": 3, 00:09:42.205 "num_base_bdevs_operational": 3, 00:09:42.205 "base_bdevs_list": [ 00:09:42.205 { 00:09:42.205 "name": "pt1", 00:09:42.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.205 "is_configured": true, 00:09:42.205 "data_offset": 2048, 00:09:42.205 "data_size": 63488 00:09:42.205 }, 00:09:42.205 { 00:09:42.205 "name": "pt2", 00:09:42.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.205 "is_configured": true, 00:09:42.205 "data_offset": 2048, 00:09:42.205 "data_size": 63488 00:09:42.205 }, 00:09:42.205 { 00:09:42.205 "name": "pt3", 00:09:42.205 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.205 "is_configured": true, 00:09:42.205 "data_offset": 2048, 00:09:42.205 "data_size": 63488 00:09:42.205 } 00:09:42.205 ] 00:09:42.205 }' 00:09:42.205 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.205 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:42.783 14:26:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.783 [2024-11-20 14:26:43.586558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.783 "name": "raid_bdev1", 00:09:42.783 "aliases": [ 00:09:42.783 "4387498c-da97-4d37-ae1e-639d021b3d31" 00:09:42.783 ], 00:09:42.783 "product_name": "Raid Volume", 00:09:42.783 "block_size": 512, 00:09:42.783 "num_blocks": 190464, 00:09:42.783 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:42.783 "assigned_rate_limits": { 00:09:42.783 "rw_ios_per_sec": 0, 00:09:42.783 "rw_mbytes_per_sec": 0, 00:09:42.783 "r_mbytes_per_sec": 0, 00:09:42.783 "w_mbytes_per_sec": 0 00:09:42.783 }, 00:09:42.783 "claimed": false, 00:09:42.783 "zoned": false, 00:09:42.783 "supported_io_types": { 00:09:42.783 "read": true, 00:09:42.783 "write": true, 00:09:42.783 "unmap": true, 00:09:42.783 "flush": true, 00:09:42.783 "reset": true, 00:09:42.783 "nvme_admin": false, 00:09:42.783 "nvme_io": false, 00:09:42.783 "nvme_io_md": false, 00:09:42.783 
"write_zeroes": true, 00:09:42.783 "zcopy": false, 00:09:42.783 "get_zone_info": false, 00:09:42.783 "zone_management": false, 00:09:42.783 "zone_append": false, 00:09:42.783 "compare": false, 00:09:42.783 "compare_and_write": false, 00:09:42.783 "abort": false, 00:09:42.783 "seek_hole": false, 00:09:42.783 "seek_data": false, 00:09:42.783 "copy": false, 00:09:42.783 "nvme_iov_md": false 00:09:42.783 }, 00:09:42.783 "memory_domains": [ 00:09:42.783 { 00:09:42.783 "dma_device_id": "system", 00:09:42.783 "dma_device_type": 1 00:09:42.783 }, 00:09:42.783 { 00:09:42.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.783 "dma_device_type": 2 00:09:42.783 }, 00:09:42.783 { 00:09:42.783 "dma_device_id": "system", 00:09:42.783 "dma_device_type": 1 00:09:42.783 }, 00:09:42.783 { 00:09:42.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.783 "dma_device_type": 2 00:09:42.783 }, 00:09:42.783 { 00:09:42.783 "dma_device_id": "system", 00:09:42.783 "dma_device_type": 1 00:09:42.783 }, 00:09:42.783 { 00:09:42.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.783 "dma_device_type": 2 00:09:42.783 } 00:09:42.783 ], 00:09:42.783 "driver_specific": { 00:09:42.783 "raid": { 00:09:42.783 "uuid": "4387498c-da97-4d37-ae1e-639d021b3d31", 00:09:42.783 "strip_size_kb": 64, 00:09:42.783 "state": "online", 00:09:42.783 "raid_level": "raid0", 00:09:42.783 "superblock": true, 00:09:42.783 "num_base_bdevs": 3, 00:09:42.783 "num_base_bdevs_discovered": 3, 00:09:42.783 "num_base_bdevs_operational": 3, 00:09:42.783 "base_bdevs_list": [ 00:09:42.783 { 00:09:42.783 "name": "pt1", 00:09:42.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.783 "is_configured": true, 00:09:42.783 "data_offset": 2048, 00:09:42.783 "data_size": 63488 00:09:42.783 }, 00:09:42.783 { 00:09:42.783 "name": "pt2", 00:09:42.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.783 "is_configured": true, 00:09:42.783 "data_offset": 2048, 00:09:42.783 "data_size": 63488 00:09:42.783 }, 00:09:42.783 
{ 00:09:42.783 "name": "pt3", 00:09:42.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.783 "is_configured": true, 00:09:42.783 "data_offset": 2048, 00:09:42.783 "data_size": 63488 00:09:42.783 } 00:09:42.783 ] 00:09:42.783 } 00:09:42.783 } 00:09:42.783 }' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:42.783 pt2 00:09:42.783 pt3' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.783 14:26:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.783 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.042 
[2024-11-20 14:26:43.906629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4387498c-da97-4d37-ae1e-639d021b3d31 '!=' 4387498c-da97-4d37-ae1e-639d021b3d31 ']' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65157 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65157 ']' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65157 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65157 00:09:43.042 killing process with pid 65157 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65157' 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65157 00:09:43.042 [2024-11-20 14:26:43.986033] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.042 [2024-11-20 14:26:43.986189] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.042 14:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65157 00:09:43.042 [2024-11-20 14:26:43.986274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.042 [2024-11-20 14:26:43.986295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:43.350 [2024-11-20 14:26:44.258334] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.290 14:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:44.290 00:09:44.290 real 0m5.716s 00:09:44.290 user 0m8.619s 00:09:44.290 sys 0m0.807s 00:09:44.290 14:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.290 14:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.290 ************************************ 00:09:44.290 END TEST raid_superblock_test 00:09:44.290 ************************************ 00:09:44.548 14:26:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:44.548 14:26:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.548 14:26:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.548 14:26:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.548 ************************************ 00:09:44.548 START TEST raid_read_error_test 00:09:44.548 ************************************ 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:44.548 14:26:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vv72X3iKg9 00:09:44.548 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65410 00:09:44.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65410 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65410 ']' 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.549 14:26:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.549 [2024-11-20 14:26:45.531820] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:09:44.549 [2024-11-20 14:26:45.532445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65410 ] 00:09:44.807 [2024-11-20 14:26:45.725876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.065 [2024-11-20 14:26:45.883488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.065 [2024-11-20 14:26:46.120290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.325 [2024-11-20 14:26:46.120657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.584 BaseBdev1_malloc 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.584 true 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.584 [2024-11-20 14:26:46.603055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.584 [2024-11-20 14:26:46.603264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.584 [2024-11-20 14:26:46.603305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:45.584 [2024-11-20 14:26:46.603325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.584 [2024-11-20 14:26:46.606237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.584 [2024-11-20 14:26:46.606288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.584 BaseBdev1 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.584 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 BaseBdev2_malloc 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 true 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 [2024-11-20 14:26:46.663296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.844 [2024-11-20 14:26:46.663370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.844 [2024-11-20 14:26:46.663398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:45.844 [2024-11-20 14:26:46.663415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.844 [2024-11-20 14:26:46.666227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.844 [2024-11-20 14:26:46.666276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.844 BaseBdev2 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 BaseBdev3_malloc 00:09:45.844 14:26:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 true 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 [2024-11-20 14:26:46.729784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:45.844 [2024-11-20 14:26:46.729854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.844 [2024-11-20 14:26:46.729882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:45.844 [2024-11-20 14:26:46.729901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.844 [2024-11-20 14:26:46.732749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.844 [2024-11-20 14:26:46.732922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:45.844 BaseBdev3 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.844 [2024-11-20 14:26:46.741927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.844 [2024-11-20 14:26:46.744470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.844 [2024-11-20 14:26:46.744713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.844 [2024-11-20 14:26:46.745120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:45.844 [2024-11-20 14:26:46.745251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:45.844 [2024-11-20 14:26:46.745582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:45.844 [2024-11-20 14:26:46.745857] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:45.844 [2024-11-20 14:26:46.745883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:45.844 [2024-11-20 14:26:46.746116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.844 14:26:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.844 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.845 "name": "raid_bdev1", 00:09:45.845 "uuid": "f501efec-4d5a-4f10-bfcd-06c4d5c50c16", 00:09:45.845 "strip_size_kb": 64, 00:09:45.845 "state": "online", 00:09:45.845 "raid_level": "raid0", 00:09:45.845 "superblock": true, 00:09:45.845 "num_base_bdevs": 3, 00:09:45.845 "num_base_bdevs_discovered": 3, 00:09:45.845 "num_base_bdevs_operational": 3, 00:09:45.845 "base_bdevs_list": [ 00:09:45.845 { 00:09:45.845 "name": "BaseBdev1", 00:09:45.845 "uuid": "e83589a8-2d3a-5405-aa7d-a131ba8a1cfa", 00:09:45.845 "is_configured": true, 00:09:45.845 "data_offset": 2048, 00:09:45.845 "data_size": 63488 00:09:45.845 }, 00:09:45.845 { 00:09:45.845 "name": "BaseBdev2", 00:09:45.845 "uuid": "6c984bbc-2d2e-5d2d-bf27-fe89b77312a6", 00:09:45.845 "is_configured": true, 00:09:45.845 "data_offset": 2048, 00:09:45.845 "data_size": 63488 
00:09:45.845 }, 00:09:45.845 { 00:09:45.845 "name": "BaseBdev3", 00:09:45.845 "uuid": "4256397c-90e4-5d45-a4fe-55812b85865a", 00:09:45.845 "is_configured": true, 00:09:45.845 "data_offset": 2048, 00:09:45.845 "data_size": 63488 00:09:45.845 } 00:09:45.845 ] 00:09:45.845 }' 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.845 14:26:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.411 14:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.411 14:26:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:46.411 [2024-11-20 14:26:47.383700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.439 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.439 "name": "raid_bdev1", 00:09:47.439 "uuid": "f501efec-4d5a-4f10-bfcd-06c4d5c50c16", 00:09:47.439 "strip_size_kb": 64, 00:09:47.439 "state": "online", 00:09:47.439 "raid_level": "raid0", 00:09:47.439 "superblock": true, 00:09:47.439 "num_base_bdevs": 3, 00:09:47.439 "num_base_bdevs_discovered": 3, 00:09:47.439 "num_base_bdevs_operational": 3, 00:09:47.439 "base_bdevs_list": [ 00:09:47.439 { 00:09:47.439 "name": "BaseBdev1", 00:09:47.439 "uuid": "e83589a8-2d3a-5405-aa7d-a131ba8a1cfa", 00:09:47.439 "is_configured": true, 00:09:47.439 "data_offset": 2048, 00:09:47.439 "data_size": 63488 
00:09:47.439 }, 00:09:47.439 { 00:09:47.439 "name": "BaseBdev2", 00:09:47.439 "uuid": "6c984bbc-2d2e-5d2d-bf27-fe89b77312a6", 00:09:47.439 "is_configured": true, 00:09:47.439 "data_offset": 2048, 00:09:47.439 "data_size": 63488 00:09:47.439 }, 00:09:47.440 { 00:09:47.440 "name": "BaseBdev3", 00:09:47.440 "uuid": "4256397c-90e4-5d45-a4fe-55812b85865a", 00:09:47.440 "is_configured": true, 00:09:47.440 "data_offset": 2048, 00:09:47.440 "data_size": 63488 00:09:47.440 } 00:09:47.440 ] 00:09:47.440 }' 00:09:47.440 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.440 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.006 [2024-11-20 14:26:48.779092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.006 [2024-11-20 14:26:48.779259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.006 [2024-11-20 14:26:48.782722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.006 [2024-11-20 14:26:48.782782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.006 [2024-11-20 14:26:48.782840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.006 [2024-11-20 14:26:48.782855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:48.006 { 00:09:48.006 "results": [ 00:09:48.006 { 00:09:48.006 "job": "raid_bdev1", 00:09:48.006 "core_mask": "0x1", 00:09:48.006 "workload": "randrw", 00:09:48.006 "percentage": 50, 
00:09:48.006 "status": "finished", 00:09:48.006 "queue_depth": 1, 00:09:48.006 "io_size": 131072, 00:09:48.006 "runtime": 1.392834, 00:09:48.006 "iops": 10483.661369553012, 00:09:48.006 "mibps": 1310.4576711941265, 00:09:48.006 "io_failed": 1, 00:09:48.006 "io_timeout": 0, 00:09:48.006 "avg_latency_us": 133.32400092135487, 00:09:48.006 "min_latency_us": 42.589090909090906, 00:09:48.006 "max_latency_us": 1839.4763636363637 00:09:48.006 } 00:09:48.006 ], 00:09:48.006 "core_count": 1 00:09:48.006 } 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65410 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65410 ']' 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65410 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.006 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65410 00:09:48.007 killing process with pid 65410 00:09:48.007 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.007 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.007 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65410' 00:09:48.007 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65410 00:09:48.007 [2024-11-20 14:26:48.815389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.007 14:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65410 00:09:48.007 [2024-11-20 
14:26:49.023504] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vv72X3iKg9 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:49.381 ************************************ 00:09:49.381 END TEST raid_read_error_test 00:09:49.381 ************************************ 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:49.381 00:09:49.381 real 0m4.735s 00:09:49.381 user 0m5.888s 00:09:49.381 sys 0m0.590s 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.381 14:26:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.381 14:26:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:49.381 14:26:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:49.381 14:26:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.381 14:26:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.381 ************************************ 00:09:49.381 START TEST raid_write_error_test 00:09:49.381 ************************************ 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:49.381 14:26:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.381 14:26:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.D7uzHM9XvW 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65561 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65561 00:09:49.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65561 ']' 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.381 14:26:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.381 [2024-11-20 14:26:50.286275] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:09:49.381 [2024-11-20 14:26:50.286440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65561 ] 00:09:49.639 [2024-11-20 14:26:50.469422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.639 [2024-11-20 14:26:50.630119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.897 [2024-11-20 14:26:50.869478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.897 [2024-11-20 14:26:50.869569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 BaseBdev1_malloc 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 true 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 [2024-11-20 14:26:51.396919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:50.464 [2024-11-20 14:26:51.396990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.464 [2024-11-20 14:26:51.397019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:50.464 [2024-11-20 14:26:51.397037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.464 [2024-11-20 14:26:51.399871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.464 [2024-11-20 14:26:51.399922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:50.464 BaseBdev1 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.464 BaseBdev2_malloc 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 true 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.464 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.464 [2024-11-20 14:26:51.453274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:50.464 [2024-11-20 14:26:51.453346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.464 [2024-11-20 14:26:51.453383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:50.464 [2024-11-20 14:26:51.453402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.465 [2024-11-20 14:26:51.456272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.465 [2024-11-20 14:26:51.456322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:50.465 BaseBdev2 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.465 14:26:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.465 BaseBdev3_malloc 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.465 true 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.465 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.465 [2024-11-20 14:26:51.517557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.465 [2024-11-20 14:26:51.517642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.465 [2024-11-20 14:26:51.517671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:50.465 [2024-11-20 14:26:51.517700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.723 [2024-11-20 14:26:51.520520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.723 [2024-11-20 14:26:51.520571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:50.723 BaseBdev3 00:09:50.723 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.723 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:50.723 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.723 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.723 [2024-11-20 14:26:51.525674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.723 [2024-11-20 14:26:51.528238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.723 [2024-11-20 14:26:51.528341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.723 [2024-11-20 14:26:51.528609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:50.723 [2024-11-20 14:26:51.528677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.723 [2024-11-20 14:26:51.528998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:50.723 [2024-11-20 14:26:51.529271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:50.723 [2024-11-20 14:26:51.529301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:50.723 [2024-11-20 14:26:51.529540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.723 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.724 "name": "raid_bdev1", 00:09:50.724 "uuid": "64bf4ae1-f422-4c8d-9186-31e952e51a0c", 00:09:50.724 "strip_size_kb": 64, 00:09:50.724 "state": "online", 00:09:50.724 "raid_level": "raid0", 00:09:50.724 "superblock": true, 00:09:50.724 "num_base_bdevs": 3, 00:09:50.724 "num_base_bdevs_discovered": 3, 00:09:50.724 "num_base_bdevs_operational": 3, 00:09:50.724 "base_bdevs_list": [ 00:09:50.724 { 00:09:50.724 "name": "BaseBdev1", 
00:09:50.724 "uuid": "227c0657-6ae6-5df6-897f-f35a38e667c6", 00:09:50.724 "is_configured": true, 00:09:50.724 "data_offset": 2048, 00:09:50.724 "data_size": 63488 00:09:50.724 }, 00:09:50.724 { 00:09:50.724 "name": "BaseBdev2", 00:09:50.724 "uuid": "f8a20ea1-6740-5621-b6f1-6bf662a5cc67", 00:09:50.724 "is_configured": true, 00:09:50.724 "data_offset": 2048, 00:09:50.724 "data_size": 63488 00:09:50.724 }, 00:09:50.724 { 00:09:50.724 "name": "BaseBdev3", 00:09:50.724 "uuid": "e1e6c966-ff52-51f8-ad05-511a57133c10", 00:09:50.724 "is_configured": true, 00:09:50.724 "data_offset": 2048, 00:09:50.724 "data_size": 63488 00:09:50.724 } 00:09:50.724 ] 00:09:50.724 }' 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.724 14:26:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.296 14:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:51.296 14:26:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:51.296 [2024-11-20 14:26:52.183266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.245 "name": "raid_bdev1", 00:09:52.245 "uuid": "64bf4ae1-f422-4c8d-9186-31e952e51a0c", 00:09:52.245 "strip_size_kb": 64, 00:09:52.245 "state": "online", 00:09:52.245 
"raid_level": "raid0", 00:09:52.245 "superblock": true, 00:09:52.245 "num_base_bdevs": 3, 00:09:52.245 "num_base_bdevs_discovered": 3, 00:09:52.245 "num_base_bdevs_operational": 3, 00:09:52.245 "base_bdevs_list": [ 00:09:52.245 { 00:09:52.245 "name": "BaseBdev1", 00:09:52.245 "uuid": "227c0657-6ae6-5df6-897f-f35a38e667c6", 00:09:52.245 "is_configured": true, 00:09:52.245 "data_offset": 2048, 00:09:52.245 "data_size": 63488 00:09:52.245 }, 00:09:52.245 { 00:09:52.245 "name": "BaseBdev2", 00:09:52.245 "uuid": "f8a20ea1-6740-5621-b6f1-6bf662a5cc67", 00:09:52.245 "is_configured": true, 00:09:52.245 "data_offset": 2048, 00:09:52.245 "data_size": 63488 00:09:52.245 }, 00:09:52.245 { 00:09:52.245 "name": "BaseBdev3", 00:09:52.245 "uuid": "e1e6c966-ff52-51f8-ad05-511a57133c10", 00:09:52.245 "is_configured": true, 00:09:52.245 "data_offset": 2048, 00:09:52.245 "data_size": 63488 00:09:52.245 } 00:09:52.245 ] 00:09:52.245 }' 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.245 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.813 [2024-11-20 14:26:53.571321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.813 [2024-11-20 14:26:53.571488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.813 [2024-11-20 14:26:53.574941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.813 [2024-11-20 14:26:53.575123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.813 [2024-11-20 14:26:53.575198] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.813 [2024-11-20 14:26:53.575216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:52.813 { 00:09:52.813 "results": [ 00:09:52.813 { 00:09:52.813 "job": "raid_bdev1", 00:09:52.813 "core_mask": "0x1", 00:09:52.813 "workload": "randrw", 00:09:52.813 "percentage": 50, 00:09:52.813 "status": "finished", 00:09:52.813 "queue_depth": 1, 00:09:52.813 "io_size": 131072, 00:09:52.813 "runtime": 1.385588, 00:09:52.813 "iops": 10621.483442408566, 00:09:52.813 "mibps": 1327.6854303010707, 00:09:52.813 "io_failed": 1, 00:09:52.813 "io_timeout": 0, 00:09:52.813 "avg_latency_us": 131.49156135344478, 00:09:52.813 "min_latency_us": 28.392727272727274, 00:09:52.813 "max_latency_us": 1921.3963636363637 00:09:52.813 } 00:09:52.813 ], 00:09:52.813 "core_count": 1 00:09:52.813 } 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65561 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65561 ']' 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65561 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65561 00:09:52.813 killing process with pid 65561 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.813 
14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65561' 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65561 00:09:52.813 14:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65561 00:09:52.813 [2024-11-20 14:26:53.623220] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.813 [2024-11-20 14:26:53.835576] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.D7uzHM9XvW 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:54.189 00:09:54.189 real 0m4.800s 00:09:54.189 user 0m5.981s 00:09:54.189 sys 0m0.597s 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.189 ************************************ 00:09:54.189 END TEST raid_write_error_test 00:09:54.189 ************************************ 00:09:54.189 14:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.189 14:26:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:54.189 14:26:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:54.189 14:26:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:54.189 14:26:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.189 14:26:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.189 ************************************ 00:09:54.189 START TEST raid_state_function_test 00:09:54.189 ************************************ 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:54.189 14:26:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:54.189 Process raid pid: 65705 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65705 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65705' 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65705 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.189 14:26:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65705 ']' 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.189 14:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.190 14:26:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.190 [2024-11-20 14:26:55.153302] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:09:54.190 [2024-11-20 14:26:55.153491] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.448 [2024-11-20 14:26:55.339161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.448 [2024-11-20 14:26:55.475480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.706 [2024-11-20 14:26:55.688111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.706 [2024-11-20 14:26:55.688184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.272 [2024-11-20 14:26:56.238606] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.272 [2024-11-20 14:26:56.238695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.272 [2024-11-20 14:26:56.238714] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.272 [2024-11-20 14:26:56.238732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.272 [2024-11-20 14:26:56.238742] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.272 [2024-11-20 14:26:56.238759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.272 "name": "Existed_Raid", 00:09:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.272 "strip_size_kb": 64, 00:09:55.272 "state": "configuring", 00:09:55.272 "raid_level": "concat", 00:09:55.272 "superblock": false, 00:09:55.272 "num_base_bdevs": 3, 00:09:55.272 "num_base_bdevs_discovered": 0, 00:09:55.272 "num_base_bdevs_operational": 3, 00:09:55.272 "base_bdevs_list": [ 00:09:55.272 { 00:09:55.272 "name": "BaseBdev1", 00:09:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.272 "is_configured": false, 00:09:55.272 "data_offset": 0, 00:09:55.272 "data_size": 0 00:09:55.272 }, 00:09:55.272 { 00:09:55.272 "name": "BaseBdev2", 00:09:55.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.272 "is_configured": false, 00:09:55.272 "data_offset": 0, 00:09:55.272 "data_size": 0 00:09:55.272 }, 00:09:55.272 { 00:09:55.272 "name": "BaseBdev3", 00:09:55.272 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:55.272 "is_configured": false, 00:09:55.272 "data_offset": 0, 00:09:55.272 "data_size": 0 00:09:55.272 } 00:09:55.272 ] 00:09:55.272 }' 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.272 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 [2024-11-20 14:26:56.790690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.857 [2024-11-20 14:26:56.790743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 [2024-11-20 14:26:56.798665] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.857 [2024-11-20 14:26:56.798730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.857 [2024-11-20 14:26:56.798746] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.857 [2024-11-20 14:26:56.798763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:55.857 [2024-11-20 14:26:56.798773] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.857 [2024-11-20 14:26:56.798788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 [2024-11-20 14:26:56.845590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.857 BaseBdev1 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.857 [ 00:09:55.857 { 00:09:55.857 "name": "BaseBdev1", 00:09:55.857 "aliases": [ 00:09:55.857 "c548c771-46e5-40e8-b903-7003c28829f7" 00:09:55.857 ], 00:09:55.857 "product_name": "Malloc disk", 00:09:55.857 "block_size": 512, 00:09:55.857 "num_blocks": 65536, 00:09:55.857 "uuid": "c548c771-46e5-40e8-b903-7003c28829f7", 00:09:55.857 "assigned_rate_limits": { 00:09:55.857 "rw_ios_per_sec": 0, 00:09:55.857 "rw_mbytes_per_sec": 0, 00:09:55.857 "r_mbytes_per_sec": 0, 00:09:55.857 "w_mbytes_per_sec": 0 00:09:55.857 }, 00:09:55.857 "claimed": true, 00:09:55.857 "claim_type": "exclusive_write", 00:09:55.857 "zoned": false, 00:09:55.857 "supported_io_types": { 00:09:55.857 "read": true, 00:09:55.857 "write": true, 00:09:55.857 "unmap": true, 00:09:55.857 "flush": true, 00:09:55.857 "reset": true, 00:09:55.857 "nvme_admin": false, 00:09:55.857 "nvme_io": false, 00:09:55.857 "nvme_io_md": false, 00:09:55.857 "write_zeroes": true, 00:09:55.857 "zcopy": true, 00:09:55.857 "get_zone_info": false, 00:09:55.857 "zone_management": false, 00:09:55.857 "zone_append": false, 00:09:55.857 "compare": false, 00:09:55.857 "compare_and_write": false, 00:09:55.857 "abort": true, 00:09:55.857 "seek_hole": false, 00:09:55.857 "seek_data": false, 00:09:55.857 "copy": true, 00:09:55.857 "nvme_iov_md": false 00:09:55.857 }, 00:09:55.857 "memory_domains": [ 00:09:55.857 { 00:09:55.857 "dma_device_id": "system", 00:09:55.857 "dma_device_type": 1 00:09:55.857 }, 00:09:55.857 { 00:09:55.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:55.857 "dma_device_type": 2 00:09:55.857 } 00:09:55.857 ], 00:09:55.857 "driver_specific": {} 00:09:55.857 } 00:09:55.857 ] 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.857 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.858 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.858 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.858 14:26:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.116 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.116 "name": "Existed_Raid", 00:09:56.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.116 "strip_size_kb": 64, 00:09:56.116 "state": "configuring", 00:09:56.116 "raid_level": "concat", 00:09:56.116 "superblock": false, 00:09:56.116 "num_base_bdevs": 3, 00:09:56.116 "num_base_bdevs_discovered": 1, 00:09:56.116 "num_base_bdevs_operational": 3, 00:09:56.116 "base_bdevs_list": [ 00:09:56.116 { 00:09:56.116 "name": "BaseBdev1", 00:09:56.116 "uuid": "c548c771-46e5-40e8-b903-7003c28829f7", 00:09:56.116 "is_configured": true, 00:09:56.116 "data_offset": 0, 00:09:56.116 "data_size": 65536 00:09:56.116 }, 00:09:56.116 { 00:09:56.116 "name": "BaseBdev2", 00:09:56.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.116 "is_configured": false, 00:09:56.116 "data_offset": 0, 00:09:56.116 "data_size": 0 00:09:56.116 }, 00:09:56.116 { 00:09:56.116 "name": "BaseBdev3", 00:09:56.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.116 "is_configured": false, 00:09:56.116 "data_offset": 0, 00:09:56.116 "data_size": 0 00:09:56.116 } 00:09:56.116 ] 00:09:56.116 }' 00:09:56.116 14:26:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.116 14:26:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.403 [2024-11-20 14:26:57.425870] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.403 [2024-11-20 14:26:57.425944] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.403 [2024-11-20 14:26:57.433904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.403 [2024-11-20 14:26:57.436393] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.403 [2024-11-20 14:26:57.436590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.403 [2024-11-20 14:26:57.436617] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.403 [2024-11-20 14:26:57.436658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.403 14:26:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.403 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.404 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.404 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.404 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.404 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.404 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.661 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.661 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.661 "name": "Existed_Raid", 00:09:56.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.661 "strip_size_kb": 64, 00:09:56.661 "state": "configuring", 00:09:56.661 "raid_level": "concat", 00:09:56.661 "superblock": false, 00:09:56.661 "num_base_bdevs": 3, 00:09:56.661 "num_base_bdevs_discovered": 1, 00:09:56.661 "num_base_bdevs_operational": 3, 00:09:56.661 "base_bdevs_list": [ 00:09:56.661 { 00:09:56.661 "name": "BaseBdev1", 00:09:56.661 "uuid": "c548c771-46e5-40e8-b903-7003c28829f7", 00:09:56.661 "is_configured": true, 00:09:56.661 "data_offset": 
0, 00:09:56.661 "data_size": 65536 00:09:56.661 }, 00:09:56.661 { 00:09:56.661 "name": "BaseBdev2", 00:09:56.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.661 "is_configured": false, 00:09:56.661 "data_offset": 0, 00:09:56.661 "data_size": 0 00:09:56.661 }, 00:09:56.661 { 00:09:56.661 "name": "BaseBdev3", 00:09:56.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.661 "is_configured": false, 00:09:56.661 "data_offset": 0, 00:09:56.661 "data_size": 0 00:09:56.661 } 00:09:56.661 ] 00:09:56.661 }' 00:09:56.661 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.661 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.919 14:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.919 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.919 14:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.178 [2024-11-20 14:26:58.013214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.178 BaseBdev2 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.178 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.178 [ 00:09:57.178 { 00:09:57.178 "name": "BaseBdev2", 00:09:57.178 "aliases": [ 00:09:57.178 "98fc126c-8ca3-4332-95ae-4d6c0170be9b" 00:09:57.178 ], 00:09:57.178 "product_name": "Malloc disk", 00:09:57.178 "block_size": 512, 00:09:57.178 "num_blocks": 65536, 00:09:57.178 "uuid": "98fc126c-8ca3-4332-95ae-4d6c0170be9b", 00:09:57.178 "assigned_rate_limits": { 00:09:57.178 "rw_ios_per_sec": 0, 00:09:57.178 "rw_mbytes_per_sec": 0, 00:09:57.179 "r_mbytes_per_sec": 0, 00:09:57.179 "w_mbytes_per_sec": 0 00:09:57.179 }, 00:09:57.179 "claimed": true, 00:09:57.179 "claim_type": "exclusive_write", 00:09:57.179 "zoned": false, 00:09:57.179 "supported_io_types": { 00:09:57.179 "read": true, 00:09:57.179 "write": true, 00:09:57.179 "unmap": true, 00:09:57.179 "flush": true, 00:09:57.179 "reset": true, 00:09:57.179 "nvme_admin": false, 00:09:57.179 "nvme_io": false, 00:09:57.179 "nvme_io_md": false, 00:09:57.179 "write_zeroes": true, 00:09:57.179 "zcopy": true, 00:09:57.179 "get_zone_info": false, 00:09:57.179 "zone_management": false, 00:09:57.179 "zone_append": false, 00:09:57.179 "compare": false, 00:09:57.179 "compare_and_write": false, 00:09:57.179 "abort": true, 00:09:57.179 "seek_hole": 
false, 00:09:57.179 "seek_data": false, 00:09:57.179 "copy": true, 00:09:57.179 "nvme_iov_md": false 00:09:57.179 }, 00:09:57.179 "memory_domains": [ 00:09:57.179 { 00:09:57.179 "dma_device_id": "system", 00:09:57.179 "dma_device_type": 1 00:09:57.179 }, 00:09:57.179 { 00:09:57.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.179 "dma_device_type": 2 00:09:57.179 } 00:09:57.179 ], 00:09:57.179 "driver_specific": {} 00:09:57.179 } 00:09:57.179 ] 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.179 "name": "Existed_Raid", 00:09:57.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.179 "strip_size_kb": 64, 00:09:57.179 "state": "configuring", 00:09:57.179 "raid_level": "concat", 00:09:57.179 "superblock": false, 00:09:57.179 "num_base_bdevs": 3, 00:09:57.179 "num_base_bdevs_discovered": 2, 00:09:57.179 "num_base_bdevs_operational": 3, 00:09:57.179 "base_bdevs_list": [ 00:09:57.179 { 00:09:57.179 "name": "BaseBdev1", 00:09:57.179 "uuid": "c548c771-46e5-40e8-b903-7003c28829f7", 00:09:57.179 "is_configured": true, 00:09:57.179 "data_offset": 0, 00:09:57.179 "data_size": 65536 00:09:57.179 }, 00:09:57.179 { 00:09:57.179 "name": "BaseBdev2", 00:09:57.179 "uuid": "98fc126c-8ca3-4332-95ae-4d6c0170be9b", 00:09:57.179 "is_configured": true, 00:09:57.179 "data_offset": 0, 00:09:57.179 "data_size": 65536 00:09:57.179 }, 00:09:57.179 { 00:09:57.179 "name": "BaseBdev3", 00:09:57.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.179 "is_configured": false, 00:09:57.179 "data_offset": 0, 00:09:57.179 "data_size": 0 00:09:57.179 } 00:09:57.179 ] 00:09:57.179 }' 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.179 14:26:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.746 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.746 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.746 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.746 [2024-11-20 14:26:58.612565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.746 [2024-11-20 14:26:58.612878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:57.746 [2024-11-20 14:26:58.612918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:57.747 [2024-11-20 14:26:58.613262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:57.747 [2024-11-20 14:26:58.613506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:57.747 [2024-11-20 14:26:58.613523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:57.747 [2024-11-20 14:26:58.613890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.747 BaseBdev3 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.747 14:26:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.747 [ 00:09:57.747 { 00:09:57.747 "name": "BaseBdev3", 00:09:57.747 "aliases": [ 00:09:57.747 "ae7864ce-ac89-45d0-a375-ff1d90929edc" 00:09:57.747 ], 00:09:57.747 "product_name": "Malloc disk", 00:09:57.747 "block_size": 512, 00:09:57.747 "num_blocks": 65536, 00:09:57.747 "uuid": "ae7864ce-ac89-45d0-a375-ff1d90929edc", 00:09:57.747 "assigned_rate_limits": { 00:09:57.747 "rw_ios_per_sec": 0, 00:09:57.747 "rw_mbytes_per_sec": 0, 00:09:57.747 "r_mbytes_per_sec": 0, 00:09:57.747 "w_mbytes_per_sec": 0 00:09:57.747 }, 00:09:57.747 "claimed": true, 00:09:57.747 "claim_type": "exclusive_write", 00:09:57.747 "zoned": false, 00:09:57.747 "supported_io_types": { 00:09:57.747 "read": true, 00:09:57.747 "write": true, 00:09:57.747 "unmap": true, 00:09:57.747 "flush": true, 00:09:57.747 "reset": true, 00:09:57.747 "nvme_admin": false, 00:09:57.747 "nvme_io": false, 00:09:57.747 "nvme_io_md": false, 00:09:57.747 "write_zeroes": true, 00:09:57.747 "zcopy": true, 00:09:57.747 "get_zone_info": false, 00:09:57.747 "zone_management": false, 00:09:57.747 "zone_append": false, 00:09:57.747 "compare": false, 
00:09:57.747 "compare_and_write": false, 00:09:57.747 "abort": true, 00:09:57.747 "seek_hole": false, 00:09:57.747 "seek_data": false, 00:09:57.747 "copy": true, 00:09:57.747 "nvme_iov_md": false 00:09:57.747 }, 00:09:57.747 "memory_domains": [ 00:09:57.747 { 00:09:57.747 "dma_device_id": "system", 00:09:57.747 "dma_device_type": 1 00:09:57.747 }, 00:09:57.747 { 00:09:57.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.747 "dma_device_type": 2 00:09:57.747 } 00:09:57.747 ], 00:09:57.747 "driver_specific": {} 00:09:57.747 } 00:09:57.747 ] 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.747 "name": "Existed_Raid", 00:09:57.747 "uuid": "263e34a8-e7c0-47c6-852b-826ed0382627", 00:09:57.747 "strip_size_kb": 64, 00:09:57.747 "state": "online", 00:09:57.747 "raid_level": "concat", 00:09:57.747 "superblock": false, 00:09:57.747 "num_base_bdevs": 3, 00:09:57.747 "num_base_bdevs_discovered": 3, 00:09:57.747 "num_base_bdevs_operational": 3, 00:09:57.747 "base_bdevs_list": [ 00:09:57.747 { 00:09:57.747 "name": "BaseBdev1", 00:09:57.747 "uuid": "c548c771-46e5-40e8-b903-7003c28829f7", 00:09:57.747 "is_configured": true, 00:09:57.747 "data_offset": 0, 00:09:57.747 "data_size": 65536 00:09:57.747 }, 00:09:57.747 { 00:09:57.747 "name": "BaseBdev2", 00:09:57.747 "uuid": "98fc126c-8ca3-4332-95ae-4d6c0170be9b", 00:09:57.747 "is_configured": true, 00:09:57.747 "data_offset": 0, 00:09:57.747 "data_size": 65536 00:09:57.747 }, 00:09:57.747 { 00:09:57.747 "name": "BaseBdev3", 00:09:57.747 "uuid": "ae7864ce-ac89-45d0-a375-ff1d90929edc", 00:09:57.747 "is_configured": true, 00:09:57.747 "data_offset": 0, 00:09:57.747 "data_size": 65536 00:09:57.747 } 00:09:57.747 ] 00:09:57.747 }' 00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:57.747 14:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.316 [2024-11-20 14:26:59.161386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.316 "name": "Existed_Raid", 00:09:58.316 "aliases": [ 00:09:58.316 "263e34a8-e7c0-47c6-852b-826ed0382627" 00:09:58.316 ], 00:09:58.316 "product_name": "Raid Volume", 00:09:58.316 "block_size": 512, 00:09:58.316 "num_blocks": 196608, 00:09:58.316 "uuid": "263e34a8-e7c0-47c6-852b-826ed0382627", 00:09:58.316 "assigned_rate_limits": { 00:09:58.316 "rw_ios_per_sec": 0, 00:09:58.316 "rw_mbytes_per_sec": 0, 00:09:58.316 "r_mbytes_per_sec": 
0, 00:09:58.316 "w_mbytes_per_sec": 0 00:09:58.316 }, 00:09:58.316 "claimed": false, 00:09:58.316 "zoned": false, 00:09:58.316 "supported_io_types": { 00:09:58.316 "read": true, 00:09:58.316 "write": true, 00:09:58.316 "unmap": true, 00:09:58.316 "flush": true, 00:09:58.316 "reset": true, 00:09:58.316 "nvme_admin": false, 00:09:58.316 "nvme_io": false, 00:09:58.316 "nvme_io_md": false, 00:09:58.316 "write_zeroes": true, 00:09:58.316 "zcopy": false, 00:09:58.316 "get_zone_info": false, 00:09:58.316 "zone_management": false, 00:09:58.316 "zone_append": false, 00:09:58.316 "compare": false, 00:09:58.316 "compare_and_write": false, 00:09:58.316 "abort": false, 00:09:58.316 "seek_hole": false, 00:09:58.316 "seek_data": false, 00:09:58.316 "copy": false, 00:09:58.316 "nvme_iov_md": false 00:09:58.316 }, 00:09:58.316 "memory_domains": [ 00:09:58.316 { 00:09:58.316 "dma_device_id": "system", 00:09:58.316 "dma_device_type": 1 00:09:58.316 }, 00:09:58.316 { 00:09:58.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.316 "dma_device_type": 2 00:09:58.316 }, 00:09:58.316 { 00:09:58.316 "dma_device_id": "system", 00:09:58.316 "dma_device_type": 1 00:09:58.316 }, 00:09:58.316 { 00:09:58.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.316 "dma_device_type": 2 00:09:58.316 }, 00:09:58.316 { 00:09:58.316 "dma_device_id": "system", 00:09:58.316 "dma_device_type": 1 00:09:58.316 }, 00:09:58.316 { 00:09:58.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.316 "dma_device_type": 2 00:09:58.316 } 00:09:58.316 ], 00:09:58.316 "driver_specific": { 00:09:58.316 "raid": { 00:09:58.316 "uuid": "263e34a8-e7c0-47c6-852b-826ed0382627", 00:09:58.316 "strip_size_kb": 64, 00:09:58.316 "state": "online", 00:09:58.316 "raid_level": "concat", 00:09:58.316 "superblock": false, 00:09:58.316 "num_base_bdevs": 3, 00:09:58.316 "num_base_bdevs_discovered": 3, 00:09:58.316 "num_base_bdevs_operational": 3, 00:09:58.316 "base_bdevs_list": [ 00:09:58.316 { 00:09:58.316 "name": "BaseBdev1", 
00:09:58.316 "uuid": "c548c771-46e5-40e8-b903-7003c28829f7", 00:09:58.316 "is_configured": true, 00:09:58.316 "data_offset": 0, 00:09:58.316 "data_size": 65536 00:09:58.316 }, 00:09:58.316 { 00:09:58.316 "name": "BaseBdev2", 00:09:58.316 "uuid": "98fc126c-8ca3-4332-95ae-4d6c0170be9b", 00:09:58.316 "is_configured": true, 00:09:58.316 "data_offset": 0, 00:09:58.316 "data_size": 65536 00:09:58.316 }, 00:09:58.316 { 00:09:58.316 "name": "BaseBdev3", 00:09:58.316 "uuid": "ae7864ce-ac89-45d0-a375-ff1d90929edc", 00:09:58.316 "is_configured": true, 00:09:58.316 "data_offset": 0, 00:09:58.316 "data_size": 65536 00:09:58.316 } 00:09:58.316 ] 00:09:58.316 } 00:09:58.316 } 00:09:58.316 }' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:58.316 BaseBdev2 00:09:58.316 BaseBdev3' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.316 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.576 [2024-11-20 14:26:59.469007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:58.576 [2024-11-20 14:26:59.469046] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:58.576 [2024-11-20 14:26:59.469120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:58.576 "name": "Existed_Raid",
00:09:58.576 "uuid": "263e34a8-e7c0-47c6-852b-826ed0382627",
00:09:58.576 "strip_size_kb": 64,
00:09:58.576 "state": "offline",
00:09:58.576 "raid_level": "concat",
00:09:58.576 "superblock": false,
00:09:58.576 "num_base_bdevs": 3,
00:09:58.576 "num_base_bdevs_discovered": 2,
00:09:58.576 "num_base_bdevs_operational": 2,
00:09:58.576 "base_bdevs_list": [
00:09:58.576 {
00:09:58.576 "name": null,
00:09:58.576 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:58.576 "is_configured": false,
00:09:58.576 "data_offset": 0,
00:09:58.576 "data_size": 65536
00:09:58.576 },
00:09:58.576 {
00:09:58.576 "name": "BaseBdev2",
00:09:58.576 "uuid": "98fc126c-8ca3-4332-95ae-4d6c0170be9b",
00:09:58.576 "is_configured": true,
00:09:58.576 "data_offset": 0,
00:09:58.576 "data_size": 65536
00:09:58.576 },
00:09:58.576 {
00:09:58.576 "name": "BaseBdev3",
00:09:58.576 "uuid": "ae7864ce-ac89-45d0-a375-ff1d90929edc",
00:09:58.576 "is_configured": true,
00:09:58.576 "data_offset": 0,
00:09:58.576 "data_size": 65536
00:09:58.576 }
00:09:58.576 ]
00:09:58.576 }'
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:58.576 14:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.142 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:59.142 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.142 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.142 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.143 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.143 [2024-11-20 14:27:00.175468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.400 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.401 [2024-11-20 14:27:00.326152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:59.401 [2024-11-20 14:27:00.326222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.401 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.659 BaseBdev2
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.659 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.659 [
00:09:59.659 {
00:09:59.659 "name": "BaseBdev2",
00:09:59.659 "aliases": [
00:09:59.659 "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b"
00:09:59.659 ],
00:09:59.659 "product_name": "Malloc disk",
00:09:59.659 "block_size": 512,
00:09:59.659 "num_blocks": 65536,
00:09:59.659 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b",
00:09:59.659 "assigned_rate_limits": {
00:09:59.659 "rw_ios_per_sec": 0,
00:09:59.659 "rw_mbytes_per_sec": 0,
00:09:59.659 "r_mbytes_per_sec": 0,
00:09:59.659 "w_mbytes_per_sec": 0
00:09:59.659 },
00:09:59.659 "claimed": false,
00:09:59.659 "zoned": false,
00:09:59.659 "supported_io_types": {
00:09:59.659 "read": true,
00:09:59.659 "write": true,
00:09:59.659 "unmap": true,
00:09:59.660 "flush": true,
00:09:59.660 "reset": true,
00:09:59.660 "nvme_admin": false,
00:09:59.660 "nvme_io": false,
00:09:59.660 "nvme_io_md": false,
00:09:59.660 "write_zeroes": true,
00:09:59.660 "zcopy": true,
00:09:59.660 "get_zone_info": false,
00:09:59.660 "zone_management": false,
00:09:59.660 "zone_append": false,
00:09:59.660 "compare": false,
00:09:59.660 "compare_and_write": false,
00:09:59.660 "abort": true,
00:09:59.660 "seek_hole": false,
00:09:59.660 "seek_data": false,
00:09:59.660 "copy": true,
00:09:59.660 "nvme_iov_md": false
00:09:59.660 },
00:09:59.660 "memory_domains": [
00:09:59.660 {
00:09:59.660 "dma_device_id": "system",
00:09:59.660 "dma_device_type": 1
00:09:59.660 },
00:09:59.660 {
00:09:59.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.660 "dma_device_type": 2
00:09:59.660 }
00:09:59.660 ],
00:09:59.660 "driver_specific": {}
00:09:59.660 }
00:09:59.660 ]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.660 BaseBdev3
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.660 [
00:09:59.660 {
00:09:59.660 "name": "BaseBdev3",
00:09:59.660 "aliases": [
00:09:59.660 "6a5e4a06-1270-43cc-9757-0e00f4d157e7"
00:09:59.660 ],
00:09:59.660 "product_name": "Malloc disk",
00:09:59.660 "block_size": 512,
00:09:59.660 "num_blocks": 65536,
00:09:59.660 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7",
00:09:59.660 "assigned_rate_limits": {
00:09:59.660 "rw_ios_per_sec": 0,
00:09:59.660 "rw_mbytes_per_sec": 0,
00:09:59.660 "r_mbytes_per_sec": 0,
00:09:59.660 "w_mbytes_per_sec": 0
00:09:59.660 },
00:09:59.660 "claimed": false,
00:09:59.660 "zoned": false,
00:09:59.660 "supported_io_types": {
00:09:59.660 "read": true,
00:09:59.660 "write": true,
00:09:59.660 "unmap": true,
00:09:59.660 "flush": true,
00:09:59.660 "reset": true,
00:09:59.660 "nvme_admin": false,
00:09:59.660 "nvme_io": false,
00:09:59.660 "nvme_io_md": false,
00:09:59.660 "write_zeroes": true,
00:09:59.660 "zcopy": true,
00:09:59.660 "get_zone_info": false,
00:09:59.660 "zone_management": false,
00:09:59.660 "zone_append": false,
00:09:59.660 "compare": false,
00:09:59.660 "compare_and_write": false,
00:09:59.660 "abort": true,
00:09:59.660 "seek_hole": false,
00:09:59.660 "seek_data": false,
00:09:59.660 "copy": true,
00:09:59.660 "nvme_iov_md": false
00:09:59.660 },
00:09:59.660 "memory_domains": [
00:09:59.660 {
00:09:59.660 "dma_device_id": "system",
00:09:59.660 "dma_device_type": 1
00:09:59.660 },
00:09:59.660 {
00:09:59.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.660 "dma_device_type": 2
00:09:59.660 }
00:09:59.660 ],
00:09:59.660 "driver_specific": {}
00:09:59.660 }
00:09:59.660 ]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.660 [2024-11-20 14:27:00.636038] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:59.660 [2024-11-20 14:27:00.636218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:59.660 [2024-11-20 14:27:00.636365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:59.660 [2024-11-20 14:27:00.639096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:59.660 "name": "Existed_Raid",
00:09:59.660 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.660 "strip_size_kb": 64,
00:09:59.660 "state": "configuring",
00:09:59.660 "raid_level": "concat",
00:09:59.660 "superblock": false,
00:09:59.660 "num_base_bdevs": 3,
00:09:59.660 "num_base_bdevs_discovered": 2,
00:09:59.660 "num_base_bdevs_operational": 3,
00:09:59.660 "base_bdevs_list": [
00:09:59.660 {
00:09:59.660 "name": "BaseBdev1",
00:09:59.660 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.660 "is_configured": false,
00:09:59.660 "data_offset": 0,
00:09:59.660 "data_size": 0
00:09:59.660 },
00:09:59.660 {
00:09:59.660 "name": "BaseBdev2",
00:09:59.660 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b",
00:09:59.660 "is_configured": true,
00:09:59.660 "data_offset": 0,
00:09:59.660 "data_size": 65536
00:09:59.660 },
00:09:59.660 {
00:09:59.660 "name": "BaseBdev3",
00:09:59.660 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7",
00:09:59.660 "is_configured": true,
00:09:59.660 "data_offset": 0,
00:09:59.660 "data_size": 65536
00:09:59.660 }
00:09:59.660 ]
00:09:59.660 }'
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:59.660 14:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.228 [2024-11-20 14:27:01.184223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:00.228 "name": "Existed_Raid",
00:10:00.228 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.228 "strip_size_kb": 64,
00:10:00.228 "state": "configuring",
00:10:00.228 "raid_level": "concat",
00:10:00.228 "superblock": false,
00:10:00.228 "num_base_bdevs": 3,
00:10:00.228 "num_base_bdevs_discovered": 1,
00:10:00.228 "num_base_bdevs_operational": 3,
00:10:00.228 "base_bdevs_list": [
00:10:00.228 {
00:10:00.228 "name": "BaseBdev1",
00:10:00.228 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:00.228 "is_configured": false,
00:10:00.228 "data_offset": 0,
00:10:00.228 "data_size": 0
00:10:00.228 },
00:10:00.228 {
00:10:00.228 "name": null,
00:10:00.228 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b",
00:10:00.228 "is_configured": false,
00:10:00.228 "data_offset": 0,
00:10:00.228 "data_size": 65536
00:10:00.228 },
00:10:00.228 {
00:10:00.228 "name": "BaseBdev3",
00:10:00.228 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7",
00:10:00.228 "is_configured": true,
00:10:00.228 "data_offset": 0,
00:10:00.228 "data_size": 65536
00:10:00.228 }
00:10:00.228 ]
00:10:00.228 }'
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:00.228 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.796 [2024-11-20 14:27:01.782730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:00.796 BaseBdev1
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.796 [
00:10:00.796 {
00:10:00.796 "name": "BaseBdev1",
00:10:00.796 "aliases": [
00:10:00.796 "bdcd6bc2-f73a-4966-a030-951dc390e650"
00:10:00.796 ],
00:10:00.796 "product_name": "Malloc disk",
00:10:00.796 "block_size": 512,
00:10:00.796 "num_blocks": 65536,
00:10:00.796 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650",
00:10:00.796 "assigned_rate_limits": {
00:10:00.796 "rw_ios_per_sec": 0,
00:10:00.796 "rw_mbytes_per_sec": 0,
00:10:00.796 "r_mbytes_per_sec": 0,
00:10:00.796 "w_mbytes_per_sec": 0
00:10:00.796 },
00:10:00.796 "claimed": true,
00:10:00.796 "claim_type": "exclusive_write",
00:10:00.796 "zoned": false,
00:10:00.796 "supported_io_types": {
00:10:00.796 "read": true,
00:10:00.796 "write": true,
00:10:00.796 "unmap": true,
00:10:00.796 "flush": true,
00:10:00.796 "reset": true,
00:10:00.796 "nvme_admin": false,
00:10:00.796 "nvme_io": false,
00:10:00.796 "nvme_io_md": false,
00:10:00.796 "write_zeroes": true,
00:10:00.796 "zcopy": true,
00:10:00.796 "get_zone_info": false,
00:10:00.796 "zone_management": false,
00:10:00.796 "zone_append": false,
00:10:00.796 "compare": false,
00:10:00.796 "compare_and_write": false,
00:10:00.796 "abort": true,
00:10:00.796 "seek_hole": false,
00:10:00.796 "seek_data": false,
00:10:00.796 "copy": true,
00:10:00.796 "nvme_iov_md": false
00:10:00.796 },
00:10:00.796 "memory_domains": [
00:10:00.796 {
00:10:00.796 "dma_device_id": "system",
00:10:00.796 "dma_device_type": 1
00:10:00.796 },
00:10:00.796 {
00:10:00.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:00.796 "dma_device_type": 2
00:10:00.796 }
00:10:00.796 ],
00:10:00.796 "driver_specific": {}
00:10:00.796 }
00:10:00.796 ]
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.796 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:01.055 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:01.055 "name": "Existed_Raid",
00:10:01.055 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:01.055 "strip_size_kb": 64,
00:10:01.055 "state": "configuring",
00:10:01.055 "raid_level": "concat",
00:10:01.055 "superblock": false,
00:10:01.055 "num_base_bdevs": 3,
00:10:01.055 "num_base_bdevs_discovered": 2,
00:10:01.055 "num_base_bdevs_operational": 3,
00:10:01.055 "base_bdevs_list": [
00:10:01.055 {
00:10:01.055 "name": "BaseBdev1",
00:10:01.055 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650",
00:10:01.055 "is_configured": true,
00:10:01.055 "data_offset": 0,
00:10:01.055 "data_size": 65536
00:10:01.055 },
00:10:01.055 {
00:10:01.055 "name": null,
00:10:01.055 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b",
00:10:01.055 "is_configured": false,
00:10:01.055 "data_offset": 0,
00:10:01.055 "data_size": 65536
00:10:01.055 },
00:10:01.055 {
00:10:01.055 "name": "BaseBdev3",
00:10:01.055 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7",
00:10:01.055 "is_configured": true,
00:10:01.055 "data_offset": 0,
00:10:01.055 "data_size": 65536
00:10:01.055 }
00:10:01.055 ]
00:10:01.055 }'
00:10:01.055 14:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:01.055 14:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.313 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.313 [2024-11-20 14:27:02.366952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:01.572 "name": "Existed_Raid",
00:10:01.572 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:01.572 "strip_size_kb": 64,
00:10:01.572 "state": "configuring",
00:10:01.572 "raid_level": "concat",
00:10:01.572 "superblock": false,
00:10:01.572 "num_base_bdevs": 3,
00:10:01.572 "num_base_bdevs_discovered": 1,
00:10:01.572 "num_base_bdevs_operational": 3,
00:10:01.572 "base_bdevs_list": [
00:10:01.572 {
00:10:01.572 "name": "BaseBdev1",
00:10:01.572 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650",
00:10:01.572 "is_configured": true,
00:10:01.572 "data_offset": 0,
00:10:01.572 "data_size": 65536
00:10:01.572 },
00:10:01.572 {
00:10:01.572 "name": null,
00:10:01.572 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b",
00:10:01.572 "is_configured": false,
00:10:01.572 "data_offset": 0,
00:10:01.572 "data_size": 65536
00:10:01.572 },
00:10:01.572 {
00:10:01.572 "name": null,
00:10:01.572 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7",
00:10:01.572 "is_configured": false,
00:10:01.572 "data_offset": 0,
00:10:01.572 "data_size": 65536
00:10:01.572 }
00:10:01.572 ]
00:10:01.572 }'
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:01.572 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.831 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:01.831 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:01.831 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.831 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.091 [2024-11-20 14:27:02.923161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.091 "name": "Existed_Raid", 00:10:02.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.091 "strip_size_kb": 64, 00:10:02.091 "state": "configuring", 00:10:02.091 "raid_level": "concat", 00:10:02.091 "superblock": false, 00:10:02.091 "num_base_bdevs": 3, 00:10:02.091 "num_base_bdevs_discovered": 2, 00:10:02.091 "num_base_bdevs_operational": 3, 00:10:02.091 "base_bdevs_list": [ 00:10:02.091 { 00:10:02.091 "name": "BaseBdev1", 00:10:02.091 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650", 00:10:02.091 "is_configured": true, 00:10:02.091 "data_offset": 0, 00:10:02.091 "data_size": 65536 00:10:02.091 }, 00:10:02.091 { 00:10:02.091 "name": null, 00:10:02.091 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b", 00:10:02.091 "is_configured": false, 00:10:02.091 "data_offset": 0, 00:10:02.091 "data_size": 65536 00:10:02.091 }, 00:10:02.091 { 00:10:02.091 "name": "BaseBdev3", 00:10:02.091 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7", 00:10:02.091 "is_configured": true, 00:10:02.091 "data_offset": 0, 00:10:02.091 "data_size": 65536 00:10:02.091 } 00:10:02.091 ] 00:10:02.091 }' 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.091 14:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.659 [2024-11-20 14:27:03.483328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.659 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.660 
14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.660 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.660 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.660 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.660 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.660 14:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.660 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.660 "name": "Existed_Raid", 00:10:02.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.660 "strip_size_kb": 64, 00:10:02.660 "state": "configuring", 00:10:02.660 "raid_level": "concat", 00:10:02.660 "superblock": false, 00:10:02.660 "num_base_bdevs": 3, 00:10:02.660 "num_base_bdevs_discovered": 1, 00:10:02.660 "num_base_bdevs_operational": 3, 00:10:02.660 "base_bdevs_list": [ 00:10:02.660 { 00:10:02.660 "name": null, 00:10:02.660 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650", 00:10:02.660 "is_configured": false, 00:10:02.660 "data_offset": 0, 00:10:02.660 "data_size": 65536 00:10:02.660 }, 00:10:02.660 { 00:10:02.660 "name": null, 00:10:02.660 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b", 00:10:02.660 "is_configured": false, 00:10:02.660 "data_offset": 0, 00:10:02.660 "data_size": 65536 00:10:02.660 }, 00:10:02.660 { 00:10:02.660 "name": "BaseBdev3", 00:10:02.660 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7", 00:10:02.660 "is_configured": true, 00:10:02.660 "data_offset": 0, 00:10:02.660 "data_size": 65536 00:10:02.660 } 00:10:02.660 ] 00:10:02.660 }' 00:10:02.660 14:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.660 14:27:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.227 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.227 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.228 [2024-11-20 14:27:04.130662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.228 14:27:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.228 "name": "Existed_Raid", 00:10:03.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.228 "strip_size_kb": 64, 00:10:03.228 "state": "configuring", 00:10:03.228 "raid_level": "concat", 00:10:03.228 "superblock": false, 00:10:03.228 "num_base_bdevs": 3, 00:10:03.228 "num_base_bdevs_discovered": 2, 00:10:03.228 "num_base_bdevs_operational": 3, 00:10:03.228 "base_bdevs_list": [ 00:10:03.228 { 00:10:03.228 "name": null, 00:10:03.228 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650", 00:10:03.228 "is_configured": false, 00:10:03.228 "data_offset": 0, 00:10:03.228 "data_size": 65536 00:10:03.228 }, 00:10:03.228 { 00:10:03.228 "name": "BaseBdev2", 00:10:03.228 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b", 00:10:03.228 "is_configured": true, 00:10:03.228 "data_offset": 
0, 00:10:03.228 "data_size": 65536 00:10:03.228 }, 00:10:03.228 { 00:10:03.228 "name": "BaseBdev3", 00:10:03.228 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7", 00:10:03.228 "is_configured": true, 00:10:03.228 "data_offset": 0, 00:10:03.228 "data_size": 65536 00:10:03.228 } 00:10:03.228 ] 00:10:03.228 }' 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.228 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bdcd6bc2-f73a-4966-a030-951dc390e650 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 [2024-11-20 14:27:04.773478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.795 [2024-11-20 14:27:04.773542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:03.795 [2024-11-20 14:27:04.773564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:03.795 [2024-11-20 14:27:04.773994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:03.795 [2024-11-20 14:27:04.774192] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:03.795 [2024-11-20 14:27:04.774215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:03.795 [2024-11-20 14:27:04.774524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.795 NewBaseBdev 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.795 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.796 
14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.796 [ 00:10:03.796 { 00:10:03.796 "name": "NewBaseBdev", 00:10:03.796 "aliases": [ 00:10:03.796 "bdcd6bc2-f73a-4966-a030-951dc390e650" 00:10:03.796 ], 00:10:03.796 "product_name": "Malloc disk", 00:10:03.796 "block_size": 512, 00:10:03.796 "num_blocks": 65536, 00:10:03.796 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650", 00:10:03.796 "assigned_rate_limits": { 00:10:03.796 "rw_ios_per_sec": 0, 00:10:03.796 "rw_mbytes_per_sec": 0, 00:10:03.796 "r_mbytes_per_sec": 0, 00:10:03.796 "w_mbytes_per_sec": 0 00:10:03.796 }, 00:10:03.796 "claimed": true, 00:10:03.796 "claim_type": "exclusive_write", 00:10:03.796 "zoned": false, 00:10:03.796 "supported_io_types": { 00:10:03.796 "read": true, 00:10:03.796 "write": true, 00:10:03.796 "unmap": true, 00:10:03.796 "flush": true, 00:10:03.796 "reset": true, 00:10:03.796 "nvme_admin": false, 00:10:03.796 "nvme_io": false, 00:10:03.796 "nvme_io_md": false, 00:10:03.796 "write_zeroes": true, 00:10:03.796 "zcopy": true, 00:10:03.796 "get_zone_info": false, 00:10:03.796 "zone_management": false, 00:10:03.796 "zone_append": false, 00:10:03.796 "compare": false, 00:10:03.796 "compare_and_write": false, 00:10:03.796 "abort": true, 00:10:03.796 "seek_hole": false, 00:10:03.796 "seek_data": false, 00:10:03.796 "copy": true, 00:10:03.796 "nvme_iov_md": false 00:10:03.796 }, 00:10:03.796 
"memory_domains": [ 00:10:03.796 { 00:10:03.796 "dma_device_id": "system", 00:10:03.796 "dma_device_type": 1 00:10:03.796 }, 00:10:03.796 { 00:10:03.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.796 "dma_device_type": 2 00:10:03.796 } 00:10:03.796 ], 00:10:03.796 "driver_specific": {} 00:10:03.796 } 00:10:03.796 ] 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.796 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.078 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.078 "name": "Existed_Raid", 00:10:04.078 "uuid": "22a8ec03-a892-4f28-9e99-edc226e253ad", 00:10:04.078 "strip_size_kb": 64, 00:10:04.078 "state": "online", 00:10:04.078 "raid_level": "concat", 00:10:04.078 "superblock": false, 00:10:04.078 "num_base_bdevs": 3, 00:10:04.078 "num_base_bdevs_discovered": 3, 00:10:04.078 "num_base_bdevs_operational": 3, 00:10:04.078 "base_bdevs_list": [ 00:10:04.078 { 00:10:04.078 "name": "NewBaseBdev", 00:10:04.078 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650", 00:10:04.078 "is_configured": true, 00:10:04.078 "data_offset": 0, 00:10:04.078 "data_size": 65536 00:10:04.078 }, 00:10:04.078 { 00:10:04.078 "name": "BaseBdev2", 00:10:04.078 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b", 00:10:04.078 "is_configured": true, 00:10:04.078 "data_offset": 0, 00:10:04.078 "data_size": 65536 00:10:04.078 }, 00:10:04.078 { 00:10:04.078 "name": "BaseBdev3", 00:10:04.078 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7", 00:10:04.078 "is_configured": true, 00:10:04.078 "data_offset": 0, 00:10:04.078 "data_size": 65536 00:10:04.078 } 00:10:04.078 ] 00:10:04.078 }' 00:10:04.078 14:27:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.078 14:27:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.355 [2024-11-20 14:27:05.346100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.355 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.355 "name": "Existed_Raid", 00:10:04.355 "aliases": [ 00:10:04.355 "22a8ec03-a892-4f28-9e99-edc226e253ad" 00:10:04.355 ], 00:10:04.355 "product_name": "Raid Volume", 00:10:04.355 "block_size": 512, 00:10:04.355 "num_blocks": 196608, 00:10:04.355 "uuid": "22a8ec03-a892-4f28-9e99-edc226e253ad", 00:10:04.355 "assigned_rate_limits": { 00:10:04.355 "rw_ios_per_sec": 0, 00:10:04.355 "rw_mbytes_per_sec": 0, 00:10:04.355 "r_mbytes_per_sec": 0, 00:10:04.355 "w_mbytes_per_sec": 0 00:10:04.355 }, 00:10:04.355 "claimed": false, 00:10:04.355 "zoned": false, 00:10:04.355 "supported_io_types": { 00:10:04.355 "read": true, 00:10:04.355 "write": true, 00:10:04.355 "unmap": true, 00:10:04.355 "flush": true, 00:10:04.355 "reset": true, 00:10:04.355 "nvme_admin": false, 00:10:04.355 "nvme_io": false, 00:10:04.355 "nvme_io_md": false, 00:10:04.355 "write_zeroes": true, 
00:10:04.355 "zcopy": false, 00:10:04.355 "get_zone_info": false, 00:10:04.355 "zone_management": false, 00:10:04.355 "zone_append": false, 00:10:04.355 "compare": false, 00:10:04.355 "compare_and_write": false, 00:10:04.356 "abort": false, 00:10:04.356 "seek_hole": false, 00:10:04.356 "seek_data": false, 00:10:04.356 "copy": false, 00:10:04.356 "nvme_iov_md": false 00:10:04.356 }, 00:10:04.356 "memory_domains": [ 00:10:04.356 { 00:10:04.356 "dma_device_id": "system", 00:10:04.356 "dma_device_type": 1 00:10:04.356 }, 00:10:04.356 { 00:10:04.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.356 "dma_device_type": 2 00:10:04.356 }, 00:10:04.356 { 00:10:04.356 "dma_device_id": "system", 00:10:04.356 "dma_device_type": 1 00:10:04.356 }, 00:10:04.356 { 00:10:04.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.356 "dma_device_type": 2 00:10:04.356 }, 00:10:04.356 { 00:10:04.356 "dma_device_id": "system", 00:10:04.356 "dma_device_type": 1 00:10:04.356 }, 00:10:04.356 { 00:10:04.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.356 "dma_device_type": 2 00:10:04.356 } 00:10:04.356 ], 00:10:04.356 "driver_specific": { 00:10:04.356 "raid": { 00:10:04.356 "uuid": "22a8ec03-a892-4f28-9e99-edc226e253ad", 00:10:04.356 "strip_size_kb": 64, 00:10:04.356 "state": "online", 00:10:04.356 "raid_level": "concat", 00:10:04.356 "superblock": false, 00:10:04.356 "num_base_bdevs": 3, 00:10:04.356 "num_base_bdevs_discovered": 3, 00:10:04.356 "num_base_bdevs_operational": 3, 00:10:04.356 "base_bdevs_list": [ 00:10:04.356 { 00:10:04.356 "name": "NewBaseBdev", 00:10:04.356 "uuid": "bdcd6bc2-f73a-4966-a030-951dc390e650", 00:10:04.356 "is_configured": true, 00:10:04.356 "data_offset": 0, 00:10:04.356 "data_size": 65536 00:10:04.356 }, 00:10:04.356 { 00:10:04.356 "name": "BaseBdev2", 00:10:04.356 "uuid": "ed40ac50-d3c3-4f8c-a59f-c19fb861fd6b", 00:10:04.356 "is_configured": true, 00:10:04.356 "data_offset": 0, 00:10:04.356 "data_size": 65536 00:10:04.356 }, 00:10:04.356 { 
00:10:04.356 "name": "BaseBdev3", 00:10:04.356 "uuid": "6a5e4a06-1270-43cc-9757-0e00f4d157e7", 00:10:04.356 "is_configured": true, 00:10:04.356 "data_offset": 0, 00:10:04.356 "data_size": 65536 00:10:04.356 } 00:10:04.356 ] 00:10:04.356 } 00:10:04.356 } 00:10:04.356 }' 00:10:04.356 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.615 BaseBdev2 00:10:04.615 BaseBdev3' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:04.615 [2024-11-20 14:27:05.649761] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.615 [2024-11-20 14:27:05.649800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.615 [2024-11-20 14:27:05.649900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.615 [2024-11-20 14:27:05.649980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.615 [2024-11-20 14:27:05.650001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65705 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65705 ']' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65705 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.615 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65705 00:10:04.874 killing process with pid 65705 00:10:04.874 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.874 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.874 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65705' 00:10:04.874 14:27:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65705 00:10:04.874 [2024-11-20 14:27:05.686994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.874 14:27:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65705 00:10:05.132 [2024-11-20 14:27:05.956215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:06.067 00:10:06.067 real 0m11.997s 00:10:06.067 user 0m19.795s 00:10:06.067 sys 0m1.712s 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.067 ************************************ 00:10:06.067 END TEST raid_state_function_test 00:10:06.067 ************************************ 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.067 14:27:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:06.067 14:27:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.067 14:27:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.067 14:27:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.067 ************************************ 00:10:06.067 START TEST raid_state_function_test_sb 00:10:06.067 ************************************ 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.067 Process raid pid: 66343 00:10:06.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:06.067 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66343 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66343' 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66343 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66343 ']' 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.068 14:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.327 [2024-11-20 14:27:07.204087] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:10:06.327 [2024-11-20 14:27:07.204519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.584 [2024-11-20 14:27:07.389041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.584 [2024-11-20 14:27:07.551259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.842 [2024-11-20 14:27:07.770729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.842 [2024-11-20 14:27:07.771029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.409 [2024-11-20 14:27:08.287266] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.409 [2024-11-20 14:27:08.287482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.409 [2024-11-20 14:27:08.287655] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.409 [2024-11-20 14:27:08.287833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.409 [2024-11-20 14:27:08.287948] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.409 [2024-11-20 14:27:08.288009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.409 "name": "Existed_Raid", 00:10:07.409 "uuid": "67f081fd-c565-41bd-8424-6d357e193f2b", 00:10:07.409 "strip_size_kb": 64, 00:10:07.409 "state": "configuring", 00:10:07.409 "raid_level": "concat", 00:10:07.409 "superblock": true, 00:10:07.409 "num_base_bdevs": 3, 00:10:07.409 "num_base_bdevs_discovered": 0, 00:10:07.409 "num_base_bdevs_operational": 3, 00:10:07.409 "base_bdevs_list": [ 00:10:07.409 { 00:10:07.409 "name": "BaseBdev1", 00:10:07.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.409 "is_configured": false, 00:10:07.409 "data_offset": 0, 00:10:07.409 "data_size": 0 00:10:07.409 }, 00:10:07.409 { 00:10:07.409 "name": "BaseBdev2", 00:10:07.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.409 "is_configured": false, 00:10:07.409 "data_offset": 0, 00:10:07.409 "data_size": 0 00:10:07.409 }, 00:10:07.409 { 00:10:07.409 "name": "BaseBdev3", 00:10:07.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.409 "is_configured": false, 00:10:07.409 "data_offset": 0, 00:10:07.409 "data_size": 0 00:10:07.409 } 00:10:07.409 ] 00:10:07.409 }' 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.409 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 [2024-11-20 14:27:08.823356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.976 [2024-11-20 14:27:08.823545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 [2024-11-20 14:27:08.831333] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.976 [2024-11-20 14:27:08.831401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.976 [2024-11-20 14:27:08.831419] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.976 [2024-11-20 14:27:08.831435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.976 [2024-11-20 14:27:08.831445] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.976 [2024-11-20 14:27:08.831460] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 [2024-11-20 14:27:08.877476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.976 BaseBdev1 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.976 [ 00:10:07.976 { 00:10:07.976 "name": "BaseBdev1", 00:10:07.976 "aliases": [ 00:10:07.976 "cdf40629-711b-4100-83e9-fc92cbf4e044" 00:10:07.976 ], 00:10:07.976 "product_name": "Malloc disk", 00:10:07.976 "block_size": 512, 00:10:07.976 "num_blocks": 65536, 00:10:07.976 "uuid": "cdf40629-711b-4100-83e9-fc92cbf4e044", 00:10:07.976 "assigned_rate_limits": { 00:10:07.976 "rw_ios_per_sec": 0, 00:10:07.976 "rw_mbytes_per_sec": 0, 00:10:07.976 "r_mbytes_per_sec": 0, 00:10:07.976 "w_mbytes_per_sec": 0 00:10:07.976 }, 00:10:07.976 "claimed": true, 00:10:07.976 "claim_type": "exclusive_write", 00:10:07.976 "zoned": false, 00:10:07.976 "supported_io_types": { 00:10:07.976 "read": true, 00:10:07.976 "write": true, 00:10:07.976 "unmap": true, 00:10:07.976 "flush": true, 00:10:07.976 "reset": true, 00:10:07.976 "nvme_admin": false, 00:10:07.976 "nvme_io": false, 00:10:07.976 "nvme_io_md": false, 00:10:07.976 "write_zeroes": true, 00:10:07.976 "zcopy": true, 00:10:07.976 "get_zone_info": false, 00:10:07.976 "zone_management": false, 00:10:07.976 "zone_append": false, 00:10:07.976 "compare": false, 00:10:07.976 "compare_and_write": false, 00:10:07.976 "abort": true, 00:10:07.976 "seek_hole": false, 00:10:07.976 "seek_data": false, 00:10:07.976 "copy": true, 00:10:07.976 "nvme_iov_md": false 00:10:07.976 }, 00:10:07.976 "memory_domains": [ 00:10:07.976 { 00:10:07.976 "dma_device_id": "system", 00:10:07.976 "dma_device_type": 1 00:10:07.976 }, 00:10:07.976 { 00:10:07.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.976 "dma_device_type": 2 00:10:07.976 } 00:10:07.976 ], 00:10:07.976 "driver_specific": {} 00:10:07.976 } 00:10:07.976 ] 00:10:07.976 14:27:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.976 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.977 14:27:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.977 "name": "Existed_Raid", 00:10:07.977 "uuid": "fc5a2152-715f-4421-97f5-3ec097c090a3", 00:10:07.977 "strip_size_kb": 64, 00:10:07.977 "state": "configuring", 00:10:07.977 "raid_level": "concat", 00:10:07.977 "superblock": true, 00:10:07.977 "num_base_bdevs": 3, 00:10:07.977 "num_base_bdevs_discovered": 1, 00:10:07.977 "num_base_bdevs_operational": 3, 00:10:07.977 "base_bdevs_list": [ 00:10:07.977 { 00:10:07.977 "name": "BaseBdev1", 00:10:07.977 "uuid": "cdf40629-711b-4100-83e9-fc92cbf4e044", 00:10:07.977 "is_configured": true, 00:10:07.977 "data_offset": 2048, 00:10:07.977 "data_size": 63488 00:10:07.977 }, 00:10:07.977 { 00:10:07.977 "name": "BaseBdev2", 00:10:07.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.977 "is_configured": false, 00:10:07.977 "data_offset": 0, 00:10:07.977 "data_size": 0 00:10:07.977 }, 00:10:07.977 { 00:10:07.977 "name": "BaseBdev3", 00:10:07.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.977 "is_configured": false, 00:10:07.977 "data_offset": 0, 00:10:07.977 "data_size": 0 00:10:07.977 } 00:10:07.977 ] 00:10:07.977 }' 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.977 14:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 [2024-11-20 14:27:09.425762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.544 [2024-11-20 14:27:09.425832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, 
state configuring 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 [2024-11-20 14:27:09.433766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.544 [2024-11-20 14:27:09.436275] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.544 [2024-11-20 14:27:09.436329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.544 [2024-11-20 14:27:09.436346] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.544 [2024-11-20 14:27:09.436362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=concat 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.544 "name": "Existed_Raid", 00:10:08.544 "uuid": "252b4171-adf9-42b3-93e3-aca33370f653", 00:10:08.544 "strip_size_kb": 64, 00:10:08.544 "state": "configuring", 00:10:08.544 "raid_level": "concat", 00:10:08.544 "superblock": true, 00:10:08.544 "num_base_bdevs": 3, 00:10:08.544 "num_base_bdevs_discovered": 1, 00:10:08.544 "num_base_bdevs_operational": 3, 00:10:08.544 "base_bdevs_list": [ 00:10:08.544 { 00:10:08.544 "name": "BaseBdev1", 00:10:08.544 "uuid": "cdf40629-711b-4100-83e9-fc92cbf4e044", 00:10:08.544 "is_configured": true, 00:10:08.544 "data_offset": 2048, 00:10:08.544 
"data_size": 63488 00:10:08.544 }, 00:10:08.544 { 00:10:08.544 "name": "BaseBdev2", 00:10:08.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.544 "is_configured": false, 00:10:08.544 "data_offset": 0, 00:10:08.544 "data_size": 0 00:10:08.544 }, 00:10:08.544 { 00:10:08.544 "name": "BaseBdev3", 00:10:08.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.544 "is_configured": false, 00:10:08.544 "data_offset": 0, 00:10:08.544 "data_size": 0 00:10:08.544 } 00:10:08.544 ] 00:10:08.544 }' 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.544 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.110 [2024-11-20 14:27:09.989162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.110 BaseBdev2 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.110 14:27:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.110 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.110 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:09.110 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.110 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.110 [ 00:10:09.110 { 00:10:09.110 "name": "BaseBdev2", 00:10:09.110 "aliases": [ 00:10:09.110 "b3143737-1c98-447a-8786-7a19742bbb7d" 00:10:09.110 ], 00:10:09.110 "product_name": "Malloc disk", 00:10:09.110 "block_size": 512, 00:10:09.110 "num_blocks": 65536, 00:10:09.110 "uuid": "b3143737-1c98-447a-8786-7a19742bbb7d", 00:10:09.110 "assigned_rate_limits": { 00:10:09.110 "rw_ios_per_sec": 0, 00:10:09.110 "rw_mbytes_per_sec": 0, 00:10:09.110 "r_mbytes_per_sec": 0, 00:10:09.110 "w_mbytes_per_sec": 0 00:10:09.110 }, 00:10:09.110 "claimed": true, 00:10:09.110 "claim_type": "exclusive_write", 00:10:09.110 "zoned": false, 00:10:09.110 "supported_io_types": { 00:10:09.110 "read": true, 00:10:09.110 "write": true, 00:10:09.110 "unmap": true, 00:10:09.110 "flush": true, 00:10:09.110 "reset": true, 00:10:09.110 "nvme_admin": false, 00:10:09.110 "nvme_io": false, 00:10:09.110 "nvme_io_md": false, 00:10:09.111 "write_zeroes": true, 00:10:09.111 "zcopy": true, 00:10:09.111 "get_zone_info": false, 00:10:09.111 "zone_management": false, 00:10:09.111 "zone_append": false, 00:10:09.111 "compare": false, 00:10:09.111 "compare_and_write": false, 00:10:09.111 
"abort": true, 00:10:09.111 "seek_hole": false, 00:10:09.111 "seek_data": false, 00:10:09.111 "copy": true, 00:10:09.111 "nvme_iov_md": false 00:10:09.111 }, 00:10:09.111 "memory_domains": [ 00:10:09.111 { 00:10:09.111 "dma_device_id": "system", 00:10:09.111 "dma_device_type": 1 00:10:09.111 }, 00:10:09.111 { 00:10:09.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.111 "dma_device_type": 2 00:10:09.111 } 00:10:09.111 ], 00:10:09.111 "driver_specific": {} 00:10:09.111 } 00:10:09.111 ] 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.111 "name": "Existed_Raid", 00:10:09.111 "uuid": "252b4171-adf9-42b3-93e3-aca33370f653", 00:10:09.111 "strip_size_kb": 64, 00:10:09.111 "state": "configuring", 00:10:09.111 "raid_level": "concat", 00:10:09.111 "superblock": true, 00:10:09.111 "num_base_bdevs": 3, 00:10:09.111 "num_base_bdevs_discovered": 2, 00:10:09.111 "num_base_bdevs_operational": 3, 00:10:09.111 "base_bdevs_list": [ 00:10:09.111 { 00:10:09.111 "name": "BaseBdev1", 00:10:09.111 "uuid": "cdf40629-711b-4100-83e9-fc92cbf4e044", 00:10:09.111 "is_configured": true, 00:10:09.111 "data_offset": 2048, 00:10:09.111 "data_size": 63488 00:10:09.111 }, 00:10:09.111 { 00:10:09.111 "name": "BaseBdev2", 00:10:09.111 "uuid": "b3143737-1c98-447a-8786-7a19742bbb7d", 00:10:09.111 "is_configured": true, 00:10:09.111 "data_offset": 2048, 00:10:09.111 "data_size": 63488 00:10:09.111 }, 00:10:09.111 { 00:10:09.111 "name": "BaseBdev3", 00:10:09.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.111 "is_configured": false, 00:10:09.111 "data_offset": 0, 00:10:09.111 "data_size": 0 00:10:09.111 } 00:10:09.111 ] 00:10:09.111 }' 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.111 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.677 [2024-11-20 14:27:10.623811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.677 [2024-11-20 14:27:10.624315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:09.677 [2024-11-20 14:27:10.624354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:09.677 BaseBdev3 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.677 [2024-11-20 14:27:10.625179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:09.677 [2024-11-20 14:27:10.625491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:09.677 [2024-11-20 14:27:10.625515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.677 [2024-11-20 14:27:10.625835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.677 [ 00:10:09.677 { 00:10:09.677 "name": "BaseBdev3", 00:10:09.677 "aliases": [ 00:10:09.677 "7d9202e8-7e5f-4989-9eca-c3347713fe63" 00:10:09.677 ], 00:10:09.677 "product_name": "Malloc disk", 00:10:09.677 "block_size": 512, 00:10:09.677 "num_blocks": 65536, 00:10:09.677 "uuid": "7d9202e8-7e5f-4989-9eca-c3347713fe63", 00:10:09.677 "assigned_rate_limits": { 00:10:09.677 "rw_ios_per_sec": 0, 00:10:09.677 "rw_mbytes_per_sec": 0, 00:10:09.677 "r_mbytes_per_sec": 0, 00:10:09.677 "w_mbytes_per_sec": 0 00:10:09.677 }, 00:10:09.677 "claimed": true, 00:10:09.677 "claim_type": "exclusive_write", 00:10:09.677 "zoned": false, 00:10:09.677 "supported_io_types": { 00:10:09.677 "read": true, 00:10:09.677 "write": true, 00:10:09.677 "unmap": true, 00:10:09.677 "flush": true, 00:10:09.677 "reset": true, 00:10:09.677 "nvme_admin": false, 00:10:09.677 "nvme_io": false, 00:10:09.677 "nvme_io_md": false, 00:10:09.677 "write_zeroes": true, 00:10:09.677 
"zcopy": true, 00:10:09.677 "get_zone_info": false, 00:10:09.677 "zone_management": false, 00:10:09.677 "zone_append": false, 00:10:09.677 "compare": false, 00:10:09.677 "compare_and_write": false, 00:10:09.677 "abort": true, 00:10:09.677 "seek_hole": false, 00:10:09.677 "seek_data": false, 00:10:09.677 "copy": true, 00:10:09.677 "nvme_iov_md": false 00:10:09.677 }, 00:10:09.677 "memory_domains": [ 00:10:09.677 { 00:10:09.677 "dma_device_id": "system", 00:10:09.677 "dma_device_type": 1 00:10:09.677 }, 00:10:09.677 { 00:10:09.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.677 "dma_device_type": 2 00:10:09.677 } 00:10:09.677 ], 00:10:09.677 "driver_specific": {} 00:10:09.677 } 00:10:09.677 ] 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.677 
14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.677 "name": "Existed_Raid", 00:10:09.677 "uuid": "252b4171-adf9-42b3-93e3-aca33370f653", 00:10:09.677 "strip_size_kb": 64, 00:10:09.677 "state": "online", 00:10:09.677 "raid_level": "concat", 00:10:09.677 "superblock": true, 00:10:09.677 "num_base_bdevs": 3, 00:10:09.677 "num_base_bdevs_discovered": 3, 00:10:09.677 "num_base_bdevs_operational": 3, 00:10:09.677 "base_bdevs_list": [ 00:10:09.677 { 00:10:09.677 "name": "BaseBdev1", 00:10:09.677 "uuid": "cdf40629-711b-4100-83e9-fc92cbf4e044", 00:10:09.677 "is_configured": true, 00:10:09.677 "data_offset": 2048, 00:10:09.677 "data_size": 63488 00:10:09.677 }, 00:10:09.677 { 00:10:09.677 "name": "BaseBdev2", 00:10:09.677 "uuid": "b3143737-1c98-447a-8786-7a19742bbb7d", 00:10:09.677 "is_configured": true, 00:10:09.677 "data_offset": 2048, 00:10:09.677 "data_size": 63488 00:10:09.677 }, 00:10:09.677 { 00:10:09.677 "name": "BaseBdev3", 00:10:09.677 "uuid": "7d9202e8-7e5f-4989-9eca-c3347713fe63", 00:10:09.677 
"is_configured": true, 00:10:09.677 "data_offset": 2048, 00:10:09.677 "data_size": 63488 00:10:09.677 } 00:10:09.677 ] 00:10:09.677 }' 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.677 14:27:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.244 [2024-11-20 14:27:11.188677] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.244 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.244 "name": "Existed_Raid", 00:10:10.244 "aliases": [ 00:10:10.244 "252b4171-adf9-42b3-93e3-aca33370f653" 00:10:10.244 ], 00:10:10.244 "product_name": "Raid 
Volume", 00:10:10.244 "block_size": 512, 00:10:10.244 "num_blocks": 190464, 00:10:10.244 "uuid": "252b4171-adf9-42b3-93e3-aca33370f653", 00:10:10.244 "assigned_rate_limits": { 00:10:10.244 "rw_ios_per_sec": 0, 00:10:10.244 "rw_mbytes_per_sec": 0, 00:10:10.244 "r_mbytes_per_sec": 0, 00:10:10.244 "w_mbytes_per_sec": 0 00:10:10.244 }, 00:10:10.244 "claimed": false, 00:10:10.244 "zoned": false, 00:10:10.244 "supported_io_types": { 00:10:10.244 "read": true, 00:10:10.244 "write": true, 00:10:10.244 "unmap": true, 00:10:10.244 "flush": true, 00:10:10.244 "reset": true, 00:10:10.244 "nvme_admin": false, 00:10:10.244 "nvme_io": false, 00:10:10.244 "nvme_io_md": false, 00:10:10.244 "write_zeroes": true, 00:10:10.244 "zcopy": false, 00:10:10.244 "get_zone_info": false, 00:10:10.244 "zone_management": false, 00:10:10.244 "zone_append": false, 00:10:10.244 "compare": false, 00:10:10.244 "compare_and_write": false, 00:10:10.244 "abort": false, 00:10:10.244 "seek_hole": false, 00:10:10.244 "seek_data": false, 00:10:10.244 "copy": false, 00:10:10.244 "nvme_iov_md": false 00:10:10.244 }, 00:10:10.244 "memory_domains": [ 00:10:10.244 { 00:10:10.244 "dma_device_id": "system", 00:10:10.244 "dma_device_type": 1 00:10:10.244 }, 00:10:10.244 { 00:10:10.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.244 "dma_device_type": 2 00:10:10.244 }, 00:10:10.244 { 00:10:10.244 "dma_device_id": "system", 00:10:10.244 "dma_device_type": 1 00:10:10.244 }, 00:10:10.244 { 00:10:10.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.244 "dma_device_type": 2 00:10:10.244 }, 00:10:10.244 { 00:10:10.244 "dma_device_id": "system", 00:10:10.244 "dma_device_type": 1 00:10:10.244 }, 00:10:10.244 { 00:10:10.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.244 "dma_device_type": 2 00:10:10.244 } 00:10:10.244 ], 00:10:10.244 "driver_specific": { 00:10:10.244 "raid": { 00:10:10.244 "uuid": "252b4171-adf9-42b3-93e3-aca33370f653", 00:10:10.244 "strip_size_kb": 64, 00:10:10.244 "state": "online", 
00:10:10.244 "raid_level": "concat", 00:10:10.244 "superblock": true, 00:10:10.244 "num_base_bdevs": 3, 00:10:10.244 "num_base_bdevs_discovered": 3, 00:10:10.244 "num_base_bdevs_operational": 3, 00:10:10.245 "base_bdevs_list": [ 00:10:10.245 { 00:10:10.245 "name": "BaseBdev1", 00:10:10.245 "uuid": "cdf40629-711b-4100-83e9-fc92cbf4e044", 00:10:10.245 "is_configured": true, 00:10:10.245 "data_offset": 2048, 00:10:10.245 "data_size": 63488 00:10:10.245 }, 00:10:10.245 { 00:10:10.245 "name": "BaseBdev2", 00:10:10.245 "uuid": "b3143737-1c98-447a-8786-7a19742bbb7d", 00:10:10.245 "is_configured": true, 00:10:10.245 "data_offset": 2048, 00:10:10.245 "data_size": 63488 00:10:10.245 }, 00:10:10.245 { 00:10:10.245 "name": "BaseBdev3", 00:10:10.245 "uuid": "7d9202e8-7e5f-4989-9eca-c3347713fe63", 00:10:10.245 "is_configured": true, 00:10:10.245 "data_offset": 2048, 00:10:10.245 "data_size": 63488 00:10:10.245 } 00:10:10.245 ] 00:10:10.245 } 00:10:10.245 } 00:10:10.245 }' 00:10:10.245 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.245 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:10.245 BaseBdev2 00:10:10.245 BaseBdev3' 00:10:10.245 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.503 14:27:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.503 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.503 [2024-11-20 14:27:11.504313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:10.503 [2024-11-20 14:27:11.504367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.503 [2024-11-20 14:27:11.504454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline 
concat 64 2 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.762 "name": "Existed_Raid", 00:10:10.762 "uuid": "252b4171-adf9-42b3-93e3-aca33370f653", 00:10:10.762 "strip_size_kb": 64, 00:10:10.762 "state": "offline", 00:10:10.762 "raid_level": "concat", 00:10:10.762 "superblock": true, 00:10:10.762 
"num_base_bdevs": 3, 00:10:10.762 "num_base_bdevs_discovered": 2, 00:10:10.762 "num_base_bdevs_operational": 2, 00:10:10.762 "base_bdevs_list": [ 00:10:10.762 { 00:10:10.762 "name": null, 00:10:10.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.762 "is_configured": false, 00:10:10.762 "data_offset": 0, 00:10:10.762 "data_size": 63488 00:10:10.762 }, 00:10:10.762 { 00:10:10.762 "name": "BaseBdev2", 00:10:10.762 "uuid": "b3143737-1c98-447a-8786-7a19742bbb7d", 00:10:10.762 "is_configured": true, 00:10:10.762 "data_offset": 2048, 00:10:10.762 "data_size": 63488 00:10:10.762 }, 00:10:10.762 { 00:10:10.762 "name": "BaseBdev3", 00:10:10.762 "uuid": "7d9202e8-7e5f-4989-9eca-c3347713fe63", 00:10:10.762 "is_configured": true, 00:10:10.762 "data_offset": 2048, 00:10:10.762 "data_size": 63488 00:10:10.762 } 00:10:10.762 ] 00:10:10.762 }' 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.762 14:27:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.328 14:27:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.328 [2024-11-20 14:27:12.166444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.328 14:27:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.328 [2024-11-20 14:27:12.310732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.328 [2024-11-20 14:27:12.310803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.587 
14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.587 BaseBdev2 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.587 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.588 [ 00:10:11.588 { 00:10:11.588 "name": "BaseBdev2", 00:10:11.588 "aliases": [ 00:10:11.588 "ff7fb987-3c67-4b7e-978c-8acee06cb9c0" 00:10:11.588 ], 00:10:11.588 "product_name": "Malloc disk", 00:10:11.588 "block_size": 512, 00:10:11.588 "num_blocks": 65536, 
00:10:11.588 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:11.588 "assigned_rate_limits": { 00:10:11.588 "rw_ios_per_sec": 0, 00:10:11.588 "rw_mbytes_per_sec": 0, 00:10:11.588 "r_mbytes_per_sec": 0, 00:10:11.588 "w_mbytes_per_sec": 0 00:10:11.588 }, 00:10:11.588 "claimed": false, 00:10:11.588 "zoned": false, 00:10:11.588 "supported_io_types": { 00:10:11.588 "read": true, 00:10:11.588 "write": true, 00:10:11.588 "unmap": true, 00:10:11.588 "flush": true, 00:10:11.588 "reset": true, 00:10:11.588 "nvme_admin": false, 00:10:11.588 "nvme_io": false, 00:10:11.588 "nvme_io_md": false, 00:10:11.588 "write_zeroes": true, 00:10:11.588 "zcopy": true, 00:10:11.588 "get_zone_info": false, 00:10:11.588 "zone_management": false, 00:10:11.588 "zone_append": false, 00:10:11.588 "compare": false, 00:10:11.588 "compare_and_write": false, 00:10:11.588 "abort": true, 00:10:11.588 "seek_hole": false, 00:10:11.588 "seek_data": false, 00:10:11.588 "copy": true, 00:10:11.588 "nvme_iov_md": false 00:10:11.588 }, 00:10:11.588 "memory_domains": [ 00:10:11.588 { 00:10:11.588 "dma_device_id": "system", 00:10:11.588 "dma_device_type": 1 00:10:11.588 }, 00:10:11.588 { 00:10:11.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.588 "dma_device_type": 2 00:10:11.588 } 00:10:11.588 ], 00:10:11.588 "driver_specific": {} 00:10:11.588 } 00:10:11.588 ] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.588 BaseBdev3 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.588 [ 00:10:11.588 { 00:10:11.588 "name": "BaseBdev3", 00:10:11.588 "aliases": [ 00:10:11.588 "3cffc60e-1cdb-4808-8bc5-011608126624" 00:10:11.588 ], 00:10:11.588 "product_name": "Malloc disk", 
00:10:11.588 "block_size": 512, 00:10:11.588 "num_blocks": 65536, 00:10:11.588 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:11.588 "assigned_rate_limits": { 00:10:11.588 "rw_ios_per_sec": 0, 00:10:11.588 "rw_mbytes_per_sec": 0, 00:10:11.588 "r_mbytes_per_sec": 0, 00:10:11.588 "w_mbytes_per_sec": 0 00:10:11.588 }, 00:10:11.588 "claimed": false, 00:10:11.588 "zoned": false, 00:10:11.588 "supported_io_types": { 00:10:11.588 "read": true, 00:10:11.588 "write": true, 00:10:11.588 "unmap": true, 00:10:11.588 "flush": true, 00:10:11.588 "reset": true, 00:10:11.588 "nvme_admin": false, 00:10:11.588 "nvme_io": false, 00:10:11.588 "nvme_io_md": false, 00:10:11.588 "write_zeroes": true, 00:10:11.588 "zcopy": true, 00:10:11.588 "get_zone_info": false, 00:10:11.588 "zone_management": false, 00:10:11.588 "zone_append": false, 00:10:11.588 "compare": false, 00:10:11.588 "compare_and_write": false, 00:10:11.588 "abort": true, 00:10:11.588 "seek_hole": false, 00:10:11.588 "seek_data": false, 00:10:11.588 "copy": true, 00:10:11.588 "nvme_iov_md": false 00:10:11.588 }, 00:10:11.588 "memory_domains": [ 00:10:11.588 { 00:10:11.588 "dma_device_id": "system", 00:10:11.588 "dma_device_type": 1 00:10:11.588 }, 00:10:11.588 { 00:10:11.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.588 "dma_device_type": 2 00:10:11.588 } 00:10:11.588 ], 00:10:11.588 "driver_specific": {} 00:10:11.588 } 00:10:11.588 ] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 
BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.588 [2024-11-20 14:27:12.611763] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.588 [2024-11-20 14:27:12.611949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.588 [2024-11-20 14:27:12.612095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.588 [2024-11-20 14:27:12.614762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.588 14:27:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.588 14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.847 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.847 "name": "Existed_Raid", 00:10:11.847 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:11.847 "strip_size_kb": 64, 00:10:11.847 "state": "configuring", 00:10:11.847 "raid_level": "concat", 00:10:11.847 "superblock": true, 00:10:11.847 "num_base_bdevs": 3, 00:10:11.847 "num_base_bdevs_discovered": 2, 00:10:11.847 "num_base_bdevs_operational": 3, 00:10:11.847 "base_bdevs_list": [ 00:10:11.847 { 00:10:11.847 "name": "BaseBdev1", 00:10:11.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.847 "is_configured": false, 00:10:11.847 "data_offset": 0, 00:10:11.847 "data_size": 0 00:10:11.847 }, 00:10:11.847 { 00:10:11.847 "name": "BaseBdev2", 00:10:11.847 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:11.847 "is_configured": true, 00:10:11.847 "data_offset": 2048, 00:10:11.847 "data_size": 63488 00:10:11.847 }, 00:10:11.847 { 00:10:11.847 "name": "BaseBdev3", 00:10:11.847 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:11.847 "is_configured": true, 00:10:11.847 "data_offset": 2048, 00:10:11.847 "data_size": 63488 00:10:11.847 } 00:10:11.847 ] 00:10:11.847 }' 00:10:11.847 14:27:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.847 
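The trace above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then `verify_raid_bdev_state` asserts on the captured state. A minimal Python sketch of the same selection and check, using field values copied from the raid_bdev_info JSON in the log (an illustrative re-implementation for reading the trace, not part of the test suite):

```python
import json

# Abridged output of `bdev_raid_get_bdevs all`, values taken from the trace above.
rpc_output = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in rpc_output if b["name"] == "Existed_Raid")

# Equivalent of: verify_raid_bdev_state Existed_Raid configuring concat 64 3
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 3
print(info["num_base_bdevs_discovered"])  # 2
```

With BaseBdev1 still missing, the raid stays in "configuring" and only two of the three base bdevs are discovered, which is exactly what the trace verifies.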
14:27:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.104 [2024-11-20 14:27:13.111886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.104 14:27:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.104 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.446 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.446 "name": "Existed_Raid", 00:10:12.446 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:12.446 "strip_size_kb": 64, 00:10:12.446 "state": "configuring", 00:10:12.446 "raid_level": "concat", 00:10:12.446 "superblock": true, 00:10:12.446 "num_base_bdevs": 3, 00:10:12.446 "num_base_bdevs_discovered": 1, 00:10:12.446 "num_base_bdevs_operational": 3, 00:10:12.446 "base_bdevs_list": [ 00:10:12.446 { 00:10:12.446 "name": "BaseBdev1", 00:10:12.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.446 "is_configured": false, 00:10:12.446 "data_offset": 0, 00:10:12.446 "data_size": 0 00:10:12.446 }, 00:10:12.446 { 00:10:12.446 "name": null, 00:10:12.446 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:12.446 "is_configured": false, 00:10:12.446 "data_offset": 0, 00:10:12.446 "data_size": 63488 00:10:12.446 }, 00:10:12.446 { 00:10:12.446 "name": "BaseBdev3", 00:10:12.446 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:12.446 "is_configured": true, 00:10:12.446 "data_offset": 2048, 00:10:12.446 "data_size": 63488 00:10:12.446 } 00:10:12.446 ] 00:10:12.446 }' 00:10:12.446 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.446 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.711 14:27:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.711 [2024-11-20 14:27:13.702667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.711 BaseBdev1 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.711 
14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.711 [ 00:10:12.711 { 00:10:12.711 "name": "BaseBdev1", 00:10:12.711 "aliases": [ 00:10:12.711 "ffffe13a-0b49-4507-8ba6-bdbda7c80bff" 00:10:12.711 ], 00:10:12.711 "product_name": "Malloc disk", 00:10:12.711 "block_size": 512, 00:10:12.711 "num_blocks": 65536, 00:10:12.711 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:12.711 "assigned_rate_limits": { 00:10:12.711 "rw_ios_per_sec": 0, 00:10:12.711 "rw_mbytes_per_sec": 0, 00:10:12.711 "r_mbytes_per_sec": 0, 00:10:12.711 "w_mbytes_per_sec": 0 00:10:12.711 }, 00:10:12.711 "claimed": true, 00:10:12.711 "claim_type": "exclusive_write", 00:10:12.711 "zoned": false, 00:10:12.711 "supported_io_types": { 00:10:12.711 "read": true, 00:10:12.711 "write": true, 00:10:12.711 "unmap": true, 00:10:12.711 "flush": true, 00:10:12.711 "reset": true, 00:10:12.711 "nvme_admin": false, 00:10:12.711 "nvme_io": false, 00:10:12.711 "nvme_io_md": false, 00:10:12.711 "write_zeroes": true, 00:10:12.711 "zcopy": true, 00:10:12.711 "get_zone_info": false, 00:10:12.711 "zone_management": false, 00:10:12.711 "zone_append": false, 00:10:12.711 "compare": false, 00:10:12.711 "compare_and_write": false, 00:10:12.711 "abort": true, 00:10:12.711 "seek_hole": false, 00:10:12.711 "seek_data": false, 00:10:12.711 "copy": true, 00:10:12.711 "nvme_iov_md": false 00:10:12.711 }, 
00:10:12.711 "memory_domains": [ 00:10:12.711 { 00:10:12.711 "dma_device_id": "system", 00:10:12.711 "dma_device_type": 1 00:10:12.711 }, 00:10:12.711 { 00:10:12.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.711 "dma_device_type": 2 00:10:12.711 } 00:10:12.711 ], 00:10:12.711 "driver_specific": {} 00:10:12.711 } 00:10:12.711 ] 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.711 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:12.712 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.712 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.712 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.969 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.969 "name": "Existed_Raid", 00:10:12.969 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:12.969 "strip_size_kb": 64, 00:10:12.969 "state": "configuring", 00:10:12.969 "raid_level": "concat", 00:10:12.969 "superblock": true, 00:10:12.969 "num_base_bdevs": 3, 00:10:12.969 "num_base_bdevs_discovered": 2, 00:10:12.969 "num_base_bdevs_operational": 3, 00:10:12.969 "base_bdevs_list": [ 00:10:12.969 { 00:10:12.969 "name": "BaseBdev1", 00:10:12.969 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:12.969 "is_configured": true, 00:10:12.969 "data_offset": 2048, 00:10:12.969 "data_size": 63488 00:10:12.969 }, 00:10:12.969 { 00:10:12.969 "name": null, 00:10:12.969 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:12.969 "is_configured": false, 00:10:12.969 "data_offset": 0, 00:10:12.969 "data_size": 63488 00:10:12.969 }, 00:10:12.969 { 00:10:12.969 "name": "BaseBdev3", 00:10:12.969 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:12.969 "is_configured": true, 00:10:12.969 "data_offset": 2048, 00:10:12.969 "data_size": 63488 00:10:12.969 } 00:10:12.969 ] 00:10:12.969 }' 00:10:12.969 14:27:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.970 14:27:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.229 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.229 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
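Between the full `verify_raid_bdev_state` passes, the trace also runs targeted probes such as `jq '.[0].base_bdevs_list[1].is_configured'` and compares the result against `false`/`true` with a `[[ ... ]]` glob match. A small Python sketch of that probe after `bdev_raid_remove_base_bdev BaseBdev2`, with slot values copied from the log (illustrative only; not SPDK code):

```python
import json

# After `bdev_raid_remove_base_bdev BaseBdev2` the slot keeps its UUID in the
# superblock but its name becomes null and is_configured drops to false
# (values from the trace above).
base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", "is_configured": true},
  {"name": null, "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", "is_configured": false},
  {"name": "BaseBdev3", "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", "is_configured": true}
]
""")

# Equivalent of: jq '.[0].base_bdevs_list[1].is_configured'  ->  false
assert base_bdevs_list[1]["is_configured"] is False

# The discovered count reported by the RPC is the number of configured slots.
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
print(discovered)  # 2
```

This mirrors the `[[ false == \f\a\l\s\e ]]` checks in the trace: the removed slot is retained (UUID preserved) but unconfigured, so it can later be re-attached with `bdev_raid_add_base_bdev`.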
00:10:13.229 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.229 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.229 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.488 [2024-11-20 14:27:14.306908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.488 "name": "Existed_Raid", 00:10:13.488 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:13.488 "strip_size_kb": 64, 00:10:13.488 "state": "configuring", 00:10:13.488 "raid_level": "concat", 00:10:13.488 "superblock": true, 00:10:13.488 "num_base_bdevs": 3, 00:10:13.488 "num_base_bdevs_discovered": 1, 00:10:13.488 "num_base_bdevs_operational": 3, 00:10:13.488 "base_bdevs_list": [ 00:10:13.488 { 00:10:13.488 "name": "BaseBdev1", 00:10:13.488 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:13.488 "is_configured": true, 00:10:13.488 "data_offset": 2048, 00:10:13.488 "data_size": 63488 00:10:13.488 }, 00:10:13.488 { 00:10:13.488 "name": null, 00:10:13.488 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:13.488 "is_configured": false, 00:10:13.488 "data_offset": 0, 00:10:13.488 "data_size": 63488 00:10:13.488 }, 00:10:13.488 { 00:10:13.488 "name": null, 00:10:13.488 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:13.488 "is_configured": false, 00:10:13.488 "data_offset": 0, 00:10:13.488 "data_size": 63488 00:10:13.488 } 00:10:13.488 ] 
00:10:13.488 }' 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.488 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.055 [2024-11-20 14:27:14.879101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.055 "name": "Existed_Raid", 00:10:14.055 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:14.055 "strip_size_kb": 64, 00:10:14.055 "state": "configuring", 00:10:14.055 "raid_level": "concat", 00:10:14.055 "superblock": true, 00:10:14.055 "num_base_bdevs": 3, 00:10:14.055 "num_base_bdevs_discovered": 2, 00:10:14.055 "num_base_bdevs_operational": 3, 00:10:14.055 "base_bdevs_list": [ 00:10:14.055 { 00:10:14.055 "name": "BaseBdev1", 00:10:14.055 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:14.055 "is_configured": true, 00:10:14.055 "data_offset": 
2048, 00:10:14.055 "data_size": 63488 00:10:14.055 }, 00:10:14.055 { 00:10:14.055 "name": null, 00:10:14.055 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:14.055 "is_configured": false, 00:10:14.055 "data_offset": 0, 00:10:14.055 "data_size": 63488 00:10:14.055 }, 00:10:14.055 { 00:10:14.055 "name": "BaseBdev3", 00:10:14.055 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:14.055 "is_configured": true, 00:10:14.055 "data_offset": 2048, 00:10:14.055 "data_size": 63488 00:10:14.055 } 00:10:14.055 ] 00:10:14.055 }' 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.055 14:27:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.622 [2024-11-20 14:27:15.479293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
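Taken together, the trace exercises a remove/re-add cycle on a superblock-enabled concat raid: each `bdev_raid_remove_base_bdev` decrements `num_base_bdevs_discovered`, each `bdev_raid_add_base_bdev` (or re-creating a malloc bdev the raid is waiting for) increments it, and the raid stays in "configuring" until all slots are configured. A toy model of that bookkeeping, assuming nothing about SPDK internals beyond what the trace shows:

```python
# Hypothetical slot map for Existed_Raid: name -> is_configured.
# State after removing BaseBdev3 and deleting BaseBdev1 (see trace).
slots = {"BaseBdev1": False, "BaseBdev2": False, "BaseBdev3": True}

def discovered(slots):
    """Configured-slot count, matching num_base_bdevs_discovered in the RPC output."""
    return sum(1 for configured in slots.values() if configured)

def state(slots):
    """The raid reports "online" only once every slot is configured."""
    return "online" if all(slots.values()) else "configuring"

assert discovered(slots) == 1 and state(slots) == "configuring"

slots["BaseBdev2"] = True   # bdev_raid_add_base_bdev Existed_Raid BaseBdev2
assert discovered(slots) == 2 and state(slots) == "configuring"

slots["BaseBdev1"] = True   # NewBaseBdev created with the original UUID
print(discovered(slots), state(slots))  # 3 online
```

The "online" transition here is a simplification for illustration; the log excerpt ends while the raid is still configuring, after the `NewBaseBdev` malloc bdev is created with the preserved UUID `ffffe13a-0b49-4507-8ba6-bdbda7c80bff`.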
00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.622 "name": "Existed_Raid", 00:10:14.622 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:14.622 
"strip_size_kb": 64, 00:10:14.622 "state": "configuring", 00:10:14.622 "raid_level": "concat", 00:10:14.622 "superblock": true, 00:10:14.622 "num_base_bdevs": 3, 00:10:14.622 "num_base_bdevs_discovered": 1, 00:10:14.622 "num_base_bdevs_operational": 3, 00:10:14.622 "base_bdevs_list": [ 00:10:14.622 { 00:10:14.622 "name": null, 00:10:14.622 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:14.622 "is_configured": false, 00:10:14.622 "data_offset": 0, 00:10:14.622 "data_size": 63488 00:10:14.622 }, 00:10:14.622 { 00:10:14.622 "name": null, 00:10:14.622 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:14.622 "is_configured": false, 00:10:14.622 "data_offset": 0, 00:10:14.622 "data_size": 63488 00:10:14.622 }, 00:10:14.622 { 00:10:14.622 "name": "BaseBdev3", 00:10:14.622 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:14.622 "is_configured": true, 00:10:14.622 "data_offset": 2048, 00:10:14.622 "data_size": 63488 00:10:14.622 } 00:10:14.622 ] 00:10:14.622 }' 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.622 14:27:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd 
bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.189 [2024-11-20 14:27:16.088545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.189 "name": "Existed_Raid", 00:10:15.189 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:15.189 "strip_size_kb": 64, 00:10:15.189 "state": "configuring", 00:10:15.189 "raid_level": "concat", 00:10:15.189 "superblock": true, 00:10:15.189 "num_base_bdevs": 3, 00:10:15.189 "num_base_bdevs_discovered": 2, 00:10:15.189 "num_base_bdevs_operational": 3, 00:10:15.189 "base_bdevs_list": [ 00:10:15.189 { 00:10:15.189 "name": null, 00:10:15.189 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:15.189 "is_configured": false, 00:10:15.189 "data_offset": 0, 00:10:15.189 "data_size": 63488 00:10:15.189 }, 00:10:15.189 { 00:10:15.189 "name": "BaseBdev2", 00:10:15.189 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:15.189 "is_configured": true, 00:10:15.189 "data_offset": 2048, 00:10:15.189 "data_size": 63488 00:10:15.189 }, 00:10:15.189 { 00:10:15.189 "name": "BaseBdev3", 00:10:15.189 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:15.189 "is_configured": true, 00:10:15.189 "data_offset": 2048, 00:10:15.189 "data_size": 63488 00:10:15.189 } 00:10:15.189 ] 00:10:15.189 }' 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.189 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.756 14:27:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ffffe13a-0b49-4507-8ba6-bdbda7c80bff 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.756 [2024-11-20 14:27:16.739350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.756 [2024-11-20 14:27:16.739705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:15.756 [2024-11-20 14:27:16.739749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:15.756 [2024-11-20 14:27:16.740076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:15.756 NewBaseBdev 00:10:15.756 [2024-11-20 14:27:16.740265] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:15.756 [2024-11-20 14:27:16.740290] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:15.756 [2024-11-20 14:27:16.740466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.756 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.756 [ 
00:10:15.756 { 00:10:15.756 "name": "NewBaseBdev", 00:10:15.756 "aliases": [ 00:10:15.756 "ffffe13a-0b49-4507-8ba6-bdbda7c80bff" 00:10:15.756 ], 00:10:15.756 "product_name": "Malloc disk", 00:10:15.756 "block_size": 512, 00:10:15.756 "num_blocks": 65536, 00:10:15.756 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:15.756 "assigned_rate_limits": { 00:10:15.756 "rw_ios_per_sec": 0, 00:10:15.756 "rw_mbytes_per_sec": 0, 00:10:15.756 "r_mbytes_per_sec": 0, 00:10:15.756 "w_mbytes_per_sec": 0 00:10:15.756 }, 00:10:15.756 "claimed": true, 00:10:15.756 "claim_type": "exclusive_write", 00:10:15.756 "zoned": false, 00:10:15.756 "supported_io_types": { 00:10:15.756 "read": true, 00:10:15.756 "write": true, 00:10:15.756 "unmap": true, 00:10:15.756 "flush": true, 00:10:15.756 "reset": true, 00:10:15.756 "nvme_admin": false, 00:10:15.756 "nvme_io": false, 00:10:15.756 "nvme_io_md": false, 00:10:15.756 "write_zeroes": true, 00:10:15.756 "zcopy": true, 00:10:15.756 "get_zone_info": false, 00:10:15.757 "zone_management": false, 00:10:15.757 "zone_append": false, 00:10:15.757 "compare": false, 00:10:15.757 "compare_and_write": false, 00:10:15.757 "abort": true, 00:10:15.757 "seek_hole": false, 00:10:15.757 "seek_data": false, 00:10:15.757 "copy": true, 00:10:15.757 "nvme_iov_md": false 00:10:15.757 }, 00:10:15.757 "memory_domains": [ 00:10:15.757 { 00:10:15.757 "dma_device_id": "system", 00:10:15.757 "dma_device_type": 1 00:10:15.757 }, 00:10:15.757 { 00:10:15.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.757 "dma_device_type": 2 00:10:15.757 } 00:10:15.757 ], 00:10:15.757 "driver_specific": {} 00:10:15.757 } 00:10:15.757 ] 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online 
concat 64 3 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.757 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.015 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.015 "name": "Existed_Raid", 00:10:16.015 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:16.015 "strip_size_kb": 64, 00:10:16.015 "state": "online", 00:10:16.015 "raid_level": "concat", 00:10:16.015 "superblock": true, 00:10:16.015 
"num_base_bdevs": 3, 00:10:16.015 "num_base_bdevs_discovered": 3, 00:10:16.015 "num_base_bdevs_operational": 3, 00:10:16.015 "base_bdevs_list": [ 00:10:16.015 { 00:10:16.015 "name": "NewBaseBdev", 00:10:16.015 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:16.015 "is_configured": true, 00:10:16.015 "data_offset": 2048, 00:10:16.015 "data_size": 63488 00:10:16.015 }, 00:10:16.015 { 00:10:16.015 "name": "BaseBdev2", 00:10:16.015 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:16.015 "is_configured": true, 00:10:16.015 "data_offset": 2048, 00:10:16.015 "data_size": 63488 00:10:16.015 }, 00:10:16.015 { 00:10:16.015 "name": "BaseBdev3", 00:10:16.015 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:16.015 "is_configured": true, 00:10:16.015 "data_offset": 2048, 00:10:16.015 "data_size": 63488 00:10:16.015 } 00:10:16.015 ] 00:10:16.015 }' 00:10:16.015 14:27:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.015 14:27:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.273 [2024-11-20 14:27:17.299954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.273 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.540 "name": "Existed_Raid", 00:10:16.540 "aliases": [ 00:10:16.540 "a07bf285-7929-4728-84be-2d3684aa3640" 00:10:16.540 ], 00:10:16.540 "product_name": "Raid Volume", 00:10:16.540 "block_size": 512, 00:10:16.540 "num_blocks": 190464, 00:10:16.540 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:16.540 "assigned_rate_limits": { 00:10:16.540 "rw_ios_per_sec": 0, 00:10:16.540 "rw_mbytes_per_sec": 0, 00:10:16.540 "r_mbytes_per_sec": 0, 00:10:16.540 "w_mbytes_per_sec": 0 00:10:16.540 }, 00:10:16.540 "claimed": false, 00:10:16.540 "zoned": false, 00:10:16.540 "supported_io_types": { 00:10:16.540 "read": true, 00:10:16.540 "write": true, 00:10:16.540 "unmap": true, 00:10:16.540 "flush": true, 00:10:16.540 "reset": true, 00:10:16.540 "nvme_admin": false, 00:10:16.540 "nvme_io": false, 00:10:16.540 "nvme_io_md": false, 00:10:16.540 "write_zeroes": true, 00:10:16.540 "zcopy": false, 00:10:16.540 "get_zone_info": false, 00:10:16.540 "zone_management": false, 00:10:16.540 "zone_append": false, 00:10:16.540 "compare": false, 00:10:16.540 "compare_and_write": false, 00:10:16.540 "abort": false, 00:10:16.540 "seek_hole": false, 00:10:16.540 "seek_data": false, 00:10:16.540 "copy": false, 00:10:16.540 "nvme_iov_md": false 00:10:16.540 }, 00:10:16.540 "memory_domains": [ 00:10:16.540 { 00:10:16.540 "dma_device_id": "system", 00:10:16.540 "dma_device_type": 1 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:16.540 "dma_device_type": 2 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "dma_device_id": "system", 00:10:16.540 "dma_device_type": 1 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.540 "dma_device_type": 2 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "dma_device_id": "system", 00:10:16.540 "dma_device_type": 1 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.540 "dma_device_type": 2 00:10:16.540 } 00:10:16.540 ], 00:10:16.540 "driver_specific": { 00:10:16.540 "raid": { 00:10:16.540 "uuid": "a07bf285-7929-4728-84be-2d3684aa3640", 00:10:16.540 "strip_size_kb": 64, 00:10:16.540 "state": "online", 00:10:16.540 "raid_level": "concat", 00:10:16.540 "superblock": true, 00:10:16.540 "num_base_bdevs": 3, 00:10:16.540 "num_base_bdevs_discovered": 3, 00:10:16.540 "num_base_bdevs_operational": 3, 00:10:16.540 "base_bdevs_list": [ 00:10:16.540 { 00:10:16.540 "name": "NewBaseBdev", 00:10:16.540 "uuid": "ffffe13a-0b49-4507-8ba6-bdbda7c80bff", 00:10:16.540 "is_configured": true, 00:10:16.540 "data_offset": 2048, 00:10:16.540 "data_size": 63488 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "name": "BaseBdev2", 00:10:16.540 "uuid": "ff7fb987-3c67-4b7e-978c-8acee06cb9c0", 00:10:16.540 "is_configured": true, 00:10:16.540 "data_offset": 2048, 00:10:16.540 "data_size": 63488 00:10:16.540 }, 00:10:16.540 { 00:10:16.540 "name": "BaseBdev3", 00:10:16.540 "uuid": "3cffc60e-1cdb-4808-8bc5-011608126624", 00:10:16.540 "is_configured": true, 00:10:16.540 "data_offset": 2048, 00:10:16.540 "data_size": 63488 00:10:16.540 } 00:10:16.540 ] 00:10:16.540 } 00:10:16.540 } 00:10:16.540 }' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 
00:10:16.540 BaseBdev2 00:10:16.540 BaseBdev3' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.540 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.813 [2024-11-20 14:27:17.599671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.813 [2024-11-20 14:27:17.599710] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.813 [2024-11-20 14:27:17.599831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.813 [2024-11-20 14:27:17.599913] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.813 [2024-11-20 14:27:17.599936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66343 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66343 ']' 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66343 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66343 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66343' 00:10:16.813 killing process with pid 66343 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66343 00:10:16.813 [2024-11-20 14:27:17.644755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.813 14:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66343 00:10:17.071 [2024-11-20 14:27:17.920212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.006 14:27:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@328 -- # return 0 00:10:18.006 00:10:18.006 real 0m11.907s 00:10:18.006 user 0m19.633s 00:10:18.006 sys 0m1.708s 00:10:18.006 14:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.006 14:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.006 ************************************ 00:10:18.006 END TEST raid_state_function_test_sb 00:10:18.006 ************************************ 00:10:18.006 14:27:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:18.006 14:27:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.006 14:27:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.006 14:27:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.006 ************************************ 00:10:18.006 START TEST raid_superblock_test 00:10:18.006 ************************************ 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:18.006 14:27:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66974 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66974 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66974 ']' 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.006 14:27:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.268 [2024-11-20 14:27:19.171900] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:10:18.268 [2024-11-20 14:27:19.172145] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66974 ] 00:10:18.530 [2024-11-20 14:27:19.361564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.530 [2024-11-20 14:27:19.495545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.788 [2024-11-20 14:27:19.702223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.788 [2024-11-20 14:27:19.702540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.355 14:27:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.355 malloc1 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.355 [2024-11-20 14:27:20.287287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:19.355 [2024-11-20 14:27:20.287384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.355 [2024-11-20 14:27:20.287423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:19.355 [2024-11-20 14:27:20.287441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.355 [2024-11-20 14:27:20.290600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.355 [2024-11-20 14:27:20.290691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:19.355 pt1 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.355 14:27:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.355 malloc2 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.355 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.356 [2024-11-20 14:27:20.344081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.356 [2024-11-20 14:27:20.344158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.356 [2024-11-20 14:27:20.344199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:19.356 
[2024-11-20 14:27:20.344215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.356 [2024-11-20 14:27:20.347100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.356 [2024-11-20 14:27:20.347148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.356 pt2 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.356 malloc3 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:19.356 
14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.356 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.356 [2024-11-20 14:27:20.408529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:19.356 [2024-11-20 14:27:20.408774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.356 [2024-11-20 14:27:20.408939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:19.356 [2024-11-20 14:27:20.409065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.614 [2024-11-20 14:27:20.412214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.614 [2024-11-20 14:27:20.412372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:19.614 pt3 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.614 [2024-11-20 14:27:20.420777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:19.614 [2024-11-20 14:27:20.423478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.614 [2024-11-20 14:27:20.423720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:19.614 [2024-11-20 
14:27:20.424121] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:19.614 [2024-11-20 14:27:20.424261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:19.614 [2024-11-20 14:27:20.424803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:19.614 [2024-11-20 14:27:20.425159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:19.614 [2024-11-20 14:27:20.425285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:19.614 [2024-11-20 14:27:20.425762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:19.614 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.615 14:27:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.615 "name": "raid_bdev1", 00:10:19.615 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:19.615 "strip_size_kb": 64, 00:10:19.615 "state": "online", 00:10:19.615 "raid_level": "concat", 00:10:19.615 "superblock": true, 00:10:19.615 "num_base_bdevs": 3, 00:10:19.615 "num_base_bdevs_discovered": 3, 00:10:19.615 "num_base_bdevs_operational": 3, 00:10:19.615 "base_bdevs_list": [ 00:10:19.615 { 00:10:19.615 "name": "pt1", 00:10:19.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.615 "is_configured": true, 00:10:19.615 "data_offset": 2048, 00:10:19.615 "data_size": 63488 00:10:19.615 }, 00:10:19.615 { 00:10:19.615 "name": "pt2", 00:10:19.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.615 "is_configured": true, 00:10:19.615 "data_offset": 2048, 00:10:19.615 "data_size": 63488 00:10:19.615 }, 00:10:19.615 { 00:10:19.615 "name": "pt3", 00:10:19.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.615 "is_configured": true, 00:10:19.615 "data_offset": 2048, 00:10:19.615 "data_size": 63488 00:10:19.615 } 00:10:19.615 ] 00:10:19.615 }' 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.615 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.874 [2024-11-20 14:27:20.914233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.133 14:27:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.133 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.133 "name": "raid_bdev1", 00:10:20.133 "aliases": [ 00:10:20.133 "29de6444-3c6e-42a3-8fee-a862b03318b0" 00:10:20.133 ], 00:10:20.133 "product_name": "Raid Volume", 00:10:20.133 "block_size": 512, 00:10:20.133 "num_blocks": 190464, 00:10:20.133 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:20.133 "assigned_rate_limits": { 00:10:20.133 "rw_ios_per_sec": 0, 00:10:20.133 "rw_mbytes_per_sec": 0, 00:10:20.133 "r_mbytes_per_sec": 0, 00:10:20.133 "w_mbytes_per_sec": 0 00:10:20.133 }, 00:10:20.133 "claimed": false, 00:10:20.133 "zoned": false, 00:10:20.133 "supported_io_types": { 00:10:20.133 "read": true, 00:10:20.133 "write": true, 00:10:20.133 "unmap": true, 
00:10:20.133 "flush": true, 00:10:20.133 "reset": true, 00:10:20.133 "nvme_admin": false, 00:10:20.133 "nvme_io": false, 00:10:20.133 "nvme_io_md": false, 00:10:20.133 "write_zeroes": true, 00:10:20.133 "zcopy": false, 00:10:20.133 "get_zone_info": false, 00:10:20.133 "zone_management": false, 00:10:20.133 "zone_append": false, 00:10:20.133 "compare": false, 00:10:20.133 "compare_and_write": false, 00:10:20.133 "abort": false, 00:10:20.133 "seek_hole": false, 00:10:20.133 "seek_data": false, 00:10:20.133 "copy": false, 00:10:20.133 "nvme_iov_md": false 00:10:20.133 }, 00:10:20.133 "memory_domains": [ 00:10:20.133 { 00:10:20.133 "dma_device_id": "system", 00:10:20.133 "dma_device_type": 1 00:10:20.133 }, 00:10:20.133 { 00:10:20.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.133 "dma_device_type": 2 00:10:20.133 }, 00:10:20.133 { 00:10:20.133 "dma_device_id": "system", 00:10:20.133 "dma_device_type": 1 00:10:20.133 }, 00:10:20.133 { 00:10:20.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.133 "dma_device_type": 2 00:10:20.133 }, 00:10:20.133 { 00:10:20.133 "dma_device_id": "system", 00:10:20.133 "dma_device_type": 1 00:10:20.133 }, 00:10:20.133 { 00:10:20.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.133 "dma_device_type": 2 00:10:20.133 } 00:10:20.133 ], 00:10:20.133 "driver_specific": { 00:10:20.133 "raid": { 00:10:20.133 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:20.133 "strip_size_kb": 64, 00:10:20.133 "state": "online", 00:10:20.133 "raid_level": "concat", 00:10:20.133 "superblock": true, 00:10:20.133 "num_base_bdevs": 3, 00:10:20.133 "num_base_bdevs_discovered": 3, 00:10:20.133 "num_base_bdevs_operational": 3, 00:10:20.133 "base_bdevs_list": [ 00:10:20.133 { 00:10:20.133 "name": "pt1", 00:10:20.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.133 "is_configured": true, 00:10:20.133 "data_offset": 2048, 00:10:20.133 "data_size": 63488 00:10:20.133 }, 00:10:20.133 { 00:10:20.133 "name": "pt2", 00:10:20.133 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:20.133 "is_configured": true, 00:10:20.133 "data_offset": 2048, 00:10:20.133 "data_size": 63488 00:10:20.133 }, 00:10:20.133 { 00:10:20.133 "name": "pt3", 00:10:20.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.133 "is_configured": true, 00:10:20.133 "data_offset": 2048, 00:10:20.133 "data_size": 63488 00:10:20.133 } 00:10:20.133 ] 00:10:20.133 } 00:10:20.133 } 00:10:20.133 }' 00:10:20.133 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.133 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.133 pt2 00:10:20.133 pt3' 00:10:20.133 14:27:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.133 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 [2024-11-20 14:27:21.218217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=29de6444-3c6e-42a3-8fee-a862b03318b0 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 29de6444-3c6e-42a3-8fee-a862b03318b0 ']' 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 [2024-11-20 14:27:21.273911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.392 [2024-11-20 14:27:21.274086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.392 [2024-11-20 14:27:21.274229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.392 [2024-11-20 14:27:21.274331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.392 [2024-11-20 14:27:21.274348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.392 14:27:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 
14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 [2024-11-20 14:27:21.426057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:20.392 [2024-11-20 14:27:21.428661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:20.392 [2024-11-20 14:27:21.428735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:20.392 [2024-11-20 14:27:21.428820] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:20.392 [2024-11-20 14:27:21.428922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:20.392 [2024-11-20 14:27:21.428972] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:20.392 [2024-11-20 14:27:21.429002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.392 [2024-11-20 14:27:21.429017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:20.392 request: 00:10:20.392 { 00:10:20.392 "name": "raid_bdev1", 00:10:20.392 "raid_level": "concat", 00:10:20.392 "base_bdevs": [ 00:10:20.392 "malloc1", 00:10:20.392 "malloc2", 00:10:20.392 "malloc3" 00:10:20.392 ], 00:10:20.392 "strip_size_kb": 64, 00:10:20.392 "superblock": false, 00:10:20.392 "method": "bdev_raid_create", 00:10:20.392 "req_id": 1 00:10:20.392 } 00:10:20.392 Got JSON-RPC error response 00:10:20.392 response: 00:10:20.392 { 00:10:20.392 "code": -17, 00:10:20.392 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:20.392 } 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 
00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.392 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.651 [2024-11-20 14:27:21.498119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.651 [2024-11-20 14:27:21.498214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.651 [2024-11-20 14:27:21.498251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:20.651 [2024-11-20 14:27:21.498267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.651 [2024-11-20 14:27:21.501462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:10:20.651 [2024-11-20 14:27:21.501652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.651 [2024-11-20 14:27:21.501810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:20.651 [2024-11-20 14:27:21.501891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.651 pt1 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.651 14:27:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.651 "name": "raid_bdev1", 00:10:20.651 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:20.651 "strip_size_kb": 64, 00:10:20.651 "state": "configuring", 00:10:20.651 "raid_level": "concat", 00:10:20.651 "superblock": true, 00:10:20.651 "num_base_bdevs": 3, 00:10:20.651 "num_base_bdevs_discovered": 1, 00:10:20.651 "num_base_bdevs_operational": 3, 00:10:20.651 "base_bdevs_list": [ 00:10:20.651 { 00:10:20.651 "name": "pt1", 00:10:20.651 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.651 "is_configured": true, 00:10:20.651 "data_offset": 2048, 00:10:20.651 "data_size": 63488 00:10:20.651 }, 00:10:20.651 { 00:10:20.651 "name": null, 00:10:20.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.651 "is_configured": false, 00:10:20.651 "data_offset": 2048, 00:10:20.651 "data_size": 63488 00:10:20.651 }, 00:10:20.651 { 00:10:20.651 "name": null, 00:10:20.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.651 "is_configured": false, 00:10:20.651 "data_offset": 2048, 00:10:20.651 "data_size": 63488 00:10:20.651 } 00:10:20.651 ] 00:10:20.651 }' 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.651 14:27:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.267 [2024-11-20 14:27:22.050357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.267 [2024-11-20 14:27:22.050480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.267 [2024-11-20 14:27:22.050527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:21.267 [2024-11-20 14:27:22.050545] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.267 [2024-11-20 14:27:22.051205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.267 [2024-11-20 14:27:22.051241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.267 [2024-11-20 14:27:22.051378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.267 [2024-11-20 14:27:22.051422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.267 pt2 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.267 [2024-11-20 14:27:22.058309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.267 "name": "raid_bdev1", 00:10:21.267 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:21.267 "strip_size_kb": 64, 00:10:21.267 "state": "configuring", 00:10:21.267 "raid_level": "concat", 00:10:21.267 "superblock": true, 00:10:21.267 "num_base_bdevs": 3, 00:10:21.267 "num_base_bdevs_discovered": 1, 00:10:21.267 "num_base_bdevs_operational": 3, 00:10:21.267 "base_bdevs_list": [ 00:10:21.267 { 00:10:21.267 "name": "pt1", 00:10:21.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.267 "is_configured": true, 00:10:21.267 "data_offset": 2048, 
00:10:21.267 "data_size": 63488 00:10:21.267 }, 00:10:21.267 { 00:10:21.267 "name": null, 00:10:21.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.267 "is_configured": false, 00:10:21.267 "data_offset": 0, 00:10:21.267 "data_size": 63488 00:10:21.267 }, 00:10:21.267 { 00:10:21.267 "name": null, 00:10:21.267 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.267 "is_configured": false, 00:10:21.267 "data_offset": 2048, 00:10:21.267 "data_size": 63488 00:10:21.267 } 00:10:21.267 ] 00:10:21.267 }' 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.267 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.569 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:21.569 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.569 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.569 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.569 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.569 [2024-11-20 14:27:22.586457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.570 [2024-11-20 14:27:22.586558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.570 [2024-11-20 14:27:22.586589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:21.570 [2024-11-20 14:27:22.586608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.570 [2024-11-20 14:27:22.587264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.570 [2024-11-20 14:27:22.587305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:10:21.570 [2024-11-20 14:27:22.587421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.570 [2024-11-20 14:27:22.587463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.570 pt2 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.570 [2024-11-20 14:27:22.598503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.570 [2024-11-20 14:27:22.598784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.570 [2024-11-20 14:27:22.598995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:21.570 [2024-11-20 14:27:22.599044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.570 [2024-11-20 14:27:22.599769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.570 [2024-11-20 14:27:22.599827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.570 [2024-11-20 14:27:22.599963] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.570 [2024-11-20 14:27:22.600015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.570 [2024-11-20 14:27:22.600193] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:21.570 [2024-11-20 14:27:22.600216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:21.570 [2024-11-20 14:27:22.600538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:21.570 [2024-11-20 14:27:22.600763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:21.570 [2024-11-20 14:27:22.600779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:21.570 [2024-11-20 14:27:22.600958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.570 pt3 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.570 14:27:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.570 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.828 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.828 14:27:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.828 "name": "raid_bdev1", 00:10:21.828 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:21.828 "strip_size_kb": 64, 00:10:21.828 "state": "online", 00:10:21.828 "raid_level": "concat", 00:10:21.828 "superblock": true, 00:10:21.828 "num_base_bdevs": 3, 00:10:21.828 "num_base_bdevs_discovered": 3, 00:10:21.828 "num_base_bdevs_operational": 3, 00:10:21.828 "base_bdevs_list": [ 00:10:21.828 { 00:10:21.828 "name": "pt1", 00:10:21.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.828 "is_configured": true, 00:10:21.828 "data_offset": 2048, 00:10:21.828 "data_size": 63488 00:10:21.828 }, 00:10:21.828 { 00:10:21.828 "name": "pt2", 00:10:21.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.828 "is_configured": true, 00:10:21.828 "data_offset": 2048, 00:10:21.828 "data_size": 63488 00:10:21.828 }, 00:10:21.828 { 00:10:21.828 "name": "pt3", 00:10:21.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.828 "is_configured": true, 00:10:21.828 "data_offset": 2048, 00:10:21.828 "data_size": 63488 00:10:21.828 } 00:10:21.828 ] 00:10:21.828 }' 00:10:21.828 14:27:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.828 14:27:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.396 [2024-11-20 14:27:23.151029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.396 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.396 "name": "raid_bdev1", 00:10:22.396 "aliases": [ 00:10:22.396 "29de6444-3c6e-42a3-8fee-a862b03318b0" 00:10:22.396 ], 00:10:22.396 "product_name": "Raid Volume", 00:10:22.396 "block_size": 512, 00:10:22.396 "num_blocks": 190464, 00:10:22.396 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:22.396 "assigned_rate_limits": { 00:10:22.396 "rw_ios_per_sec": 0, 00:10:22.396 "rw_mbytes_per_sec": 0, 00:10:22.396 "r_mbytes_per_sec": 0, 00:10:22.396 
"w_mbytes_per_sec": 0 00:10:22.396 }, 00:10:22.396 "claimed": false, 00:10:22.396 "zoned": false, 00:10:22.396 "supported_io_types": { 00:10:22.396 "read": true, 00:10:22.396 "write": true, 00:10:22.396 "unmap": true, 00:10:22.396 "flush": true, 00:10:22.396 "reset": true, 00:10:22.396 "nvme_admin": false, 00:10:22.396 "nvme_io": false, 00:10:22.396 "nvme_io_md": false, 00:10:22.396 "write_zeroes": true, 00:10:22.396 "zcopy": false, 00:10:22.396 "get_zone_info": false, 00:10:22.396 "zone_management": false, 00:10:22.396 "zone_append": false, 00:10:22.396 "compare": false, 00:10:22.396 "compare_and_write": false, 00:10:22.396 "abort": false, 00:10:22.396 "seek_hole": false, 00:10:22.396 "seek_data": false, 00:10:22.396 "copy": false, 00:10:22.396 "nvme_iov_md": false 00:10:22.396 }, 00:10:22.396 "memory_domains": [ 00:10:22.396 { 00:10:22.396 "dma_device_id": "system", 00:10:22.396 "dma_device_type": 1 00:10:22.396 }, 00:10:22.396 { 00:10:22.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.396 "dma_device_type": 2 00:10:22.396 }, 00:10:22.396 { 00:10:22.396 "dma_device_id": "system", 00:10:22.396 "dma_device_type": 1 00:10:22.396 }, 00:10:22.396 { 00:10:22.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.396 "dma_device_type": 2 00:10:22.396 }, 00:10:22.397 { 00:10:22.397 "dma_device_id": "system", 00:10:22.397 "dma_device_type": 1 00:10:22.397 }, 00:10:22.397 { 00:10:22.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.397 "dma_device_type": 2 00:10:22.397 } 00:10:22.397 ], 00:10:22.397 "driver_specific": { 00:10:22.397 "raid": { 00:10:22.397 "uuid": "29de6444-3c6e-42a3-8fee-a862b03318b0", 00:10:22.397 "strip_size_kb": 64, 00:10:22.397 "state": "online", 00:10:22.397 "raid_level": "concat", 00:10:22.397 "superblock": true, 00:10:22.397 "num_base_bdevs": 3, 00:10:22.397 "num_base_bdevs_discovered": 3, 00:10:22.397 "num_base_bdevs_operational": 3, 00:10:22.397 "base_bdevs_list": [ 00:10:22.397 { 00:10:22.397 "name": "pt1", 00:10:22.397 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:22.397 "is_configured": true, 00:10:22.397 "data_offset": 2048, 00:10:22.397 "data_size": 63488 00:10:22.397 }, 00:10:22.397 { 00:10:22.397 "name": "pt2", 00:10:22.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.397 "is_configured": true, 00:10:22.397 "data_offset": 2048, 00:10:22.397 "data_size": 63488 00:10:22.397 }, 00:10:22.397 { 00:10:22.397 "name": "pt3", 00:10:22.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.397 "is_configured": true, 00:10:22.397 "data_offset": 2048, 00:10:22.397 "data_size": 63488 00:10:22.397 } 00:10:22.397 ] 00:10:22.397 } 00:10:22.397 } 00:10:22.397 }' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.397 pt2 00:10:22.397 pt3' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.397 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.656 [2024-11-20 14:27:23.487078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 29de6444-3c6e-42a3-8fee-a862b03318b0 '!=' 29de6444-3c6e-42a3-8fee-a862b03318b0 ']' 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66974 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66974 ']' 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66974 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66974 00:10:22.656 killing process with pid 66974 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.656 14:27:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66974' 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66974 00:10:22.656 [2024-11-20 14:27:23.567715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.656 14:27:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66974 00:10:22.656 [2024-11-20 14:27:23.567852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.656 [2024-11-20 14:27:23.567940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.656 [2024-11-20 14:27:23.567970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:22.914 [2024-11-20 14:27:23.844662] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.290 14:27:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:24.290 00:10:24.290 real 0m5.871s 00:10:24.290 user 0m8.789s 00:10:24.290 sys 0m0.935s 00:10:24.290 14:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.290 ************************************ 00:10:24.290 END TEST raid_superblock_test 00:10:24.290 ************************************ 00:10:24.290 14:27:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.290 14:27:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:24.290 14:27:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.290 14:27:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.290 14:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.290 ************************************ 00:10:24.290 START TEST raid_read_error_test 00:10:24.290 ************************************ 
00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ihNKY8xZAk 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67237 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67237 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67237 ']' 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.290 14:27:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.290 [2024-11-20 14:27:25.099530] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:10:24.290 [2024-11-20 14:27:25.099955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67237 ] 00:10:24.290 [2024-11-20 14:27:25.298855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.549 [2024-11-20 14:27:25.465189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.807 [2024-11-20 14:27:25.718235] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.807 [2024-11-20 14:27:25.718321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 BaseBdev1_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 true 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 [2024-11-20 14:27:26.194092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.375 [2024-11-20 14:27:26.194174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.375 [2024-11-20 14:27:26.194209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.375 [2024-11-20 14:27:26.194228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.375 [2024-11-20 14:27:26.197570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.375 [2024-11-20 14:27:26.197644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.375 BaseBdev1 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:25.375 BaseBdev2_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 true 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 [2024-11-20 14:27:26.259664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.375 [2024-11-20 14:27:26.259746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.375 [2024-11-20 14:27:26.259779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.375 [2024-11-20 14:27:26.259797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.375 [2024-11-20 14:27:26.262974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.375 [2024-11-20 14:27:26.263037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.375 BaseBdev2 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 BaseBdev3_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 true 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.375 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.375 [2024-11-20 14:27:26.344073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.375 [2024-11-20 14:27:26.344150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.376 [2024-11-20 14:27:26.344181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:25.376 [2024-11-20 14:27:26.344199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.376 [2024-11-20 14:27:26.347509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.376 [2024-11-20 14:27:26.347581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.376 BaseBdev3 00:10:25.376 14:27:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.376 [2024-11-20 14:27:26.352164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.376 [2024-11-20 14:27:26.354924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.376 [2024-11-20 14:27:26.355034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.376 [2024-11-20 14:27:26.355336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:25.376 [2024-11-20 14:27:26.355356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.376 [2024-11-20 14:27:26.355851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:25.376 [2024-11-20 14:27:26.356126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:25.376 [2024-11-20 14:27:26.356191] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:25.376 [2024-11-20 14:27:26.356579] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.376 14:27:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.376 "name": "raid_bdev1", 00:10:25.376 "uuid": "f5d7fb35-55a5-42af-8c99-aed4b492fc0d", 00:10:25.376 "strip_size_kb": 64, 00:10:25.376 "state": "online", 00:10:25.376 "raid_level": "concat", 00:10:25.376 "superblock": true, 00:10:25.376 "num_base_bdevs": 3, 00:10:25.376 "num_base_bdevs_discovered": 3, 00:10:25.376 "num_base_bdevs_operational": 3, 00:10:25.376 "base_bdevs_list": [ 00:10:25.376 { 00:10:25.376 "name": "BaseBdev1", 00:10:25.376 "uuid": "8b45c988-37e4-5256-b30c-9e47c32717fa", 00:10:25.376 
"is_configured": true, 00:10:25.376 "data_offset": 2048, 00:10:25.376 "data_size": 63488 00:10:25.376 }, 00:10:25.376 { 00:10:25.376 "name": "BaseBdev2", 00:10:25.376 "uuid": "40a284da-279c-5e44-8b46-d34f003d36ce", 00:10:25.376 "is_configured": true, 00:10:25.376 "data_offset": 2048, 00:10:25.376 "data_size": 63488 00:10:25.376 }, 00:10:25.376 { 00:10:25.376 "name": "BaseBdev3", 00:10:25.376 "uuid": "0cdc2214-ce8e-5365-a00e-7e08df33f42d", 00:10:25.376 "is_configured": true, 00:10:25.376 "data_offset": 2048, 00:10:25.376 "data_size": 63488 00:10:25.376 } 00:10:25.376 ] 00:10:25.376 }' 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.376 14:27:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.943 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.943 14:27:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.201 [2024-11-20 14:27:27.006324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:27.137 14:27:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.137 "name": "raid_bdev1", 00:10:27.137 "uuid": "f5d7fb35-55a5-42af-8c99-aed4b492fc0d", 00:10:27.137 "strip_size_kb": 64, 00:10:27.137 "state": "online", 00:10:27.137 "raid_level": "concat", 00:10:27.137 "superblock": true, 00:10:27.137 "num_base_bdevs": 3, 
00:10:27.137 "num_base_bdevs_discovered": 3, 00:10:27.137 "num_base_bdevs_operational": 3, 00:10:27.137 "base_bdevs_list": [ 00:10:27.137 { 00:10:27.137 "name": "BaseBdev1", 00:10:27.137 "uuid": "8b45c988-37e4-5256-b30c-9e47c32717fa", 00:10:27.137 "is_configured": true, 00:10:27.137 "data_offset": 2048, 00:10:27.137 "data_size": 63488 00:10:27.137 }, 00:10:27.137 { 00:10:27.137 "name": "BaseBdev2", 00:10:27.137 "uuid": "40a284da-279c-5e44-8b46-d34f003d36ce", 00:10:27.137 "is_configured": true, 00:10:27.137 "data_offset": 2048, 00:10:27.137 "data_size": 63488 00:10:27.137 }, 00:10:27.137 { 00:10:27.137 "name": "BaseBdev3", 00:10:27.137 "uuid": "0cdc2214-ce8e-5365-a00e-7e08df33f42d", 00:10:27.137 "is_configured": true, 00:10:27.137 "data_offset": 2048, 00:10:27.137 "data_size": 63488 00:10:27.137 } 00:10:27.137 ] 00:10:27.137 }' 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.137 14:27:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.703 [2024-11-20 14:27:28.487189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.703 [2024-11-20 14:27:28.487363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.703 [2024-11-20 14:27:28.490865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.703 [2024-11-20 14:27:28.491050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.703 [2024-11-20 14:27:28.491125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:10:27.703 [2024-11-20 14:27:28.491146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:27.703 { 00:10:27.703 "results": [ 00:10:27.703 { 00:10:27.703 "job": "raid_bdev1", 00:10:27.703 "core_mask": "0x1", 00:10:27.703 "workload": "randrw", 00:10:27.703 "percentage": 50, 00:10:27.703 "status": "finished", 00:10:27.703 "queue_depth": 1, 00:10:27.703 "io_size": 131072, 00:10:27.703 "runtime": 1.478349, 00:10:27.703 "iops": 10071.370156843885, 00:10:27.703 "mibps": 1258.9212696054856, 00:10:27.703 "io_failed": 1, 00:10:27.703 "io_timeout": 0, 00:10:27.703 "avg_latency_us": 138.93661884119908, 00:10:27.703 "min_latency_us": 43.054545454545455, 00:10:27.703 "max_latency_us": 1966.08 00:10:27.703 } 00:10:27.703 ], 00:10:27.703 "core_count": 1 00:10:27.703 } 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67237 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67237 ']' 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67237 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:27.703 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.704 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67237 00:10:27.704 killing process with pid 67237 00:10:27.704 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.704 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.704 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67237' 00:10:27.704 
14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67237 00:10:27.704 [2024-11-20 14:27:28.525194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.704 14:27:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67237 00:10:27.704 [2024-11-20 14:27:28.743077] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ihNKY8xZAk 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:10:29.079 00:10:29.079 real 0m4.906s 00:10:29.079 user 0m6.162s 00:10:29.079 sys 0m0.581s 00:10:29.079 ************************************ 00:10:29.079 END TEST raid_read_error_test 00:10:29.079 ************************************ 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.079 14:27:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 14:27:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:29.079 14:27:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.079 14:27:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.079 14:27:29 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.079 ************************************ 00:10:29.079 START TEST raid_write_error_test 00:10:29.079 ************************************ 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 
00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kXHwC3p1kg 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67384 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67384 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67384 ']' 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.079 14:27:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.079 [2024-11-20 14:27:30.041673] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:10:29.079 [2024-11-20 14:27:30.041859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67384 ] 00:10:29.338 [2024-11-20 14:27:30.222541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.338 [2024-11-20 14:27:30.380002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.596 [2024-11-20 14:27:30.585664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.596 [2024-11-20 14:27:30.585708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.163 14:27:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.163 BaseBdev1_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 true 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 [2024-11-20 14:27:31.063447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.163 [2024-11-20 14:27:31.063554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.163 [2024-11-20 14:27:31.063588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.163 [2024-11-20 14:27:31.063609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.163 [2024-11-20 14:27:31.066571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.163 [2024-11-20 14:27:31.066623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.163 BaseBdev1 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.163 14:27:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 BaseBdev2_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 true 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 [2024-11-20 14:27:31.127840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.163 [2024-11-20 14:27:31.127939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.163 [2024-11-20 14:27:31.127967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.163 [2024-11-20 14:27:31.127986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.163 [2024-11-20 14:27:31.130973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.163 [2024-11-20 14:27:31.131023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:10:30.163 BaseBdev2 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 BaseBdev3_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 true 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 [2024-11-20 14:27:31.203699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.163 [2024-11-20 14:27:31.203798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.163 [2024-11-20 14:27:31.203828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:30.163 [2024-11-20 14:27:31.203847] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.163 [2024-11-20 14:27:31.206880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.163 [2024-11-20 14:27:31.207198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:30.163 BaseBdev3 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.163 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.163 [2024-11-20 14:27:31.215979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.423 [2024-11-20 14:27:31.218599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.423 [2024-11-20 14:27:31.218940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.423 [2024-11-20 14:27:31.219254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:30.423 [2024-11-20 14:27:31.219275] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:30.423 [2024-11-20 14:27:31.219655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:30.423 [2024-11-20 14:27:31.219901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:30.423 [2024-11-20 14:27:31.219924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:30.423 [2024-11-20 14:27:31.220182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.423 14:27:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.423 "name": "raid_bdev1", 00:10:30.423 "uuid": "80d41caf-9f53-4026-b2b3-5bf4eb272772", 00:10:30.423 "strip_size_kb": 64, 00:10:30.423 
"state": "online", 00:10:30.423 "raid_level": "concat", 00:10:30.423 "superblock": true, 00:10:30.423 "num_base_bdevs": 3, 00:10:30.423 "num_base_bdevs_discovered": 3, 00:10:30.423 "num_base_bdevs_operational": 3, 00:10:30.423 "base_bdevs_list": [ 00:10:30.423 { 00:10:30.423 "name": "BaseBdev1", 00:10:30.423 "uuid": "66492df4-b380-547b-8474-6df783fa7e0c", 00:10:30.423 "is_configured": true, 00:10:30.423 "data_offset": 2048, 00:10:30.423 "data_size": 63488 00:10:30.423 }, 00:10:30.423 { 00:10:30.423 "name": "BaseBdev2", 00:10:30.423 "uuid": "dfaaf333-51f5-5465-bccf-82afe5d49ff7", 00:10:30.423 "is_configured": true, 00:10:30.423 "data_offset": 2048, 00:10:30.423 "data_size": 63488 00:10:30.423 }, 00:10:30.423 { 00:10:30.423 "name": "BaseBdev3", 00:10:30.423 "uuid": "56e89351-6c92-5c30-a4de-ea266d5bee6d", 00:10:30.423 "is_configured": true, 00:10:30.423 "data_offset": 2048, 00:10:30.423 "data_size": 63488 00:10:30.423 } 00:10:30.423 ] 00:10:30.423 }' 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.423 14:27:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.024 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.024 14:27:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.024 [2024-11-20 14:27:31.922086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:31.961 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:31.961 14:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.961 14:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.961 14:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:31.961 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.961 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.962 "name": "raid_bdev1", 00:10:31.962 "uuid": "80d41caf-9f53-4026-b2b3-5bf4eb272772", 00:10:31.962 "strip_size_kb": 64, 00:10:31.962 "state": "online", 00:10:31.962 "raid_level": "concat", 00:10:31.962 "superblock": true, 00:10:31.962 "num_base_bdevs": 3, 00:10:31.962 "num_base_bdevs_discovered": 3, 00:10:31.962 "num_base_bdevs_operational": 3, 00:10:31.962 "base_bdevs_list": [ 00:10:31.962 { 00:10:31.962 "name": "BaseBdev1", 00:10:31.962 "uuid": "66492df4-b380-547b-8474-6df783fa7e0c", 00:10:31.962 "is_configured": true, 00:10:31.962 "data_offset": 2048, 00:10:31.962 "data_size": 63488 00:10:31.962 }, 00:10:31.962 { 00:10:31.962 "name": "BaseBdev2", 00:10:31.962 "uuid": "dfaaf333-51f5-5465-bccf-82afe5d49ff7", 00:10:31.962 "is_configured": true, 00:10:31.962 "data_offset": 2048, 00:10:31.962 "data_size": 63488 00:10:31.962 }, 00:10:31.962 { 00:10:31.962 "name": "BaseBdev3", 00:10:31.962 "uuid": "56e89351-6c92-5c30-a4de-ea266d5bee6d", 00:10:31.962 "is_configured": true, 00:10:31.962 "data_offset": 2048, 00:10:31.962 "data_size": 63488 00:10:31.962 } 00:10:31.962 ] 00:10:31.962 }' 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.962 14:27:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.528 [2024-11-20 14:27:33.302957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.528 [2024-11-20 14:27:33.302999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 
00:10:32.528 [2024-11-20 14:27:33.306389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.528 [2024-11-20 14:27:33.306598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.528 [2024-11-20 14:27:33.306692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.528 [2024-11-20 14:27:33.306715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:32.528 { 00:10:32.528 "results": [ 00:10:32.528 { 00:10:32.528 "job": "raid_bdev1", 00:10:32.528 "core_mask": "0x1", 00:10:32.528 "workload": "randrw", 00:10:32.528 "percentage": 50, 00:10:32.528 "status": "finished", 00:10:32.528 "queue_depth": 1, 00:10:32.528 "io_size": 131072, 00:10:32.528 "runtime": 1.377986, 00:10:32.528 "iops": 9373.099581563238, 00:10:32.528 "mibps": 1171.6374476954047, 00:10:32.528 "io_failed": 1, 00:10:32.528 "io_timeout": 0, 00:10:32.528 "avg_latency_us": 149.58406272213503, 00:10:32.528 "min_latency_us": 43.75272727272727, 00:10:32.528 "max_latency_us": 1951.1854545454546 00:10:32.528 } 00:10:32.528 ], 00:10:32.528 "core_count": 1 00:10:32.528 } 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67384 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67384 ']' 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67384 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67384 00:10:32.528 killing 
process with pid 67384 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67384' 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67384 00:10:32.528 [2024-11-20 14:27:33.342405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.528 14:27:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67384 00:10:32.528 [2024-11-20 14:27:33.579148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kXHwC3p1kg 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:33.905 00:10:33.905 real 0m4.976s 00:10:33.905 user 0m6.065s 00:10:33.905 sys 0m0.631s 00:10:33.905 14:27:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.905 ************************************ 00:10:33.905 END TEST raid_write_error_test 00:10:33.905 ************************************ 00:10:33.905 14:27:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.905 14:27:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:33.905 14:27:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:10:33.905 14:27:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.905 14:27:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.905 14:27:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.905 ************************************ 00:10:33.905 START TEST raid_state_function_test 00:10:33.905 ************************************ 00:10:33.905 14:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:10:33.905 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:33.905 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:33.905 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:33.905 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.163 Process raid pid: 67532 00:10:34.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67532 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67532' 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 67532 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67532 ']' 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.163 14:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.163 [2024-11-20 14:27:35.061604] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:10:34.163 [2024-11-20 14:27:35.062095] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.420 [2024-11-20 14:27:35.244470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.420 [2024-11-20 14:27:35.418785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.679 [2024-11-20 14:27:35.652143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.679 [2024-11-20 14:27:35.652443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.245 [2024-11-20 14:27:36.112010] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.245 [2024-11-20 14:27:36.112496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.245 [2024-11-20 14:27:36.112533] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.245 [2024-11-20 14:27:36.112558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.245 [2024-11-20 14:27:36.112578] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:10:35.245 [2024-11-20 14:27:36.112597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.245 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.245 14:27:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.245 "name": "Existed_Raid", 00:10:35.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.245 "strip_size_kb": 0, 00:10:35.245 "state": "configuring", 00:10:35.245 "raid_level": "raid1", 00:10:35.245 "superblock": false, 00:10:35.245 "num_base_bdevs": 3, 00:10:35.245 "num_base_bdevs_discovered": 0, 00:10:35.245 "num_base_bdevs_operational": 3, 00:10:35.245 "base_bdevs_list": [ 00:10:35.245 { 00:10:35.246 "name": "BaseBdev1", 00:10:35.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.246 "is_configured": false, 00:10:35.246 "data_offset": 0, 00:10:35.246 "data_size": 0 00:10:35.246 }, 00:10:35.246 { 00:10:35.246 "name": "BaseBdev2", 00:10:35.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.246 "is_configured": false, 00:10:35.246 "data_offset": 0, 00:10:35.246 "data_size": 0 00:10:35.246 }, 00:10:35.246 { 00:10:35.246 "name": "BaseBdev3", 00:10:35.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.246 "is_configured": false, 00:10:35.246 "data_offset": 0, 00:10:35.246 "data_size": 0 00:10:35.246 } 00:10:35.246 ] 00:10:35.246 }' 00:10:35.246 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.246 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 [2024-11-20 14:27:36.636111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.847 [2024-11-20 14:27:36.636185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 [2024-11-20 14:27:36.644018] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.847 [2024-11-20 14:27:36.644076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.847 [2024-11-20 14:27:36.644092] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.847 [2024-11-20 14:27:36.644108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.847 [2024-11-20 14:27:36.644117] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.847 [2024-11-20 14:27:36.644131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 [2024-11-20 14:27:36.693774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.847 BaseBdev1 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 [ 00:10:35.847 { 00:10:35.847 "name": "BaseBdev1", 00:10:35.847 "aliases": [ 00:10:35.847 "22fcce68-15fe-4b18-ab77-a10cd2c19c40" 00:10:35.847 ], 00:10:35.847 "product_name": "Malloc disk", 00:10:35.847 "block_size": 512, 00:10:35.847 "num_blocks": 65536, 00:10:35.847 "uuid": "22fcce68-15fe-4b18-ab77-a10cd2c19c40", 00:10:35.847 "assigned_rate_limits": { 00:10:35.847 "rw_ios_per_sec": 0, 00:10:35.847 "rw_mbytes_per_sec": 0, 00:10:35.847 "r_mbytes_per_sec": 0, 00:10:35.847 "w_mbytes_per_sec": 0 00:10:35.847 }, 
00:10:35.847 "claimed": true, 00:10:35.847 "claim_type": "exclusive_write", 00:10:35.847 "zoned": false, 00:10:35.847 "supported_io_types": { 00:10:35.847 "read": true, 00:10:35.847 "write": true, 00:10:35.847 "unmap": true, 00:10:35.847 "flush": true, 00:10:35.847 "reset": true, 00:10:35.847 "nvme_admin": false, 00:10:35.847 "nvme_io": false, 00:10:35.847 "nvme_io_md": false, 00:10:35.847 "write_zeroes": true, 00:10:35.847 "zcopy": true, 00:10:35.847 "get_zone_info": false, 00:10:35.847 "zone_management": false, 00:10:35.847 "zone_append": false, 00:10:35.847 "compare": false, 00:10:35.847 "compare_and_write": false, 00:10:35.847 "abort": true, 00:10:35.847 "seek_hole": false, 00:10:35.847 "seek_data": false, 00:10:35.847 "copy": true, 00:10:35.847 "nvme_iov_md": false 00:10:35.847 }, 00:10:35.847 "memory_domains": [ 00:10:35.847 { 00:10:35.847 "dma_device_id": "system", 00:10:35.847 "dma_device_type": 1 00:10:35.847 }, 00:10:35.847 { 00:10:35.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.847 "dma_device_type": 2 00:10:35.847 } 00:10:35.847 ], 00:10:35.847 "driver_specific": {} 00:10:35.847 } 00:10:35.847 ] 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.847 14:27:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.847 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.848 "name": "Existed_Raid", 00:10:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.848 "strip_size_kb": 0, 00:10:35.848 "state": "configuring", 00:10:35.848 "raid_level": "raid1", 00:10:35.848 "superblock": false, 00:10:35.848 "num_base_bdevs": 3, 00:10:35.848 "num_base_bdevs_discovered": 1, 00:10:35.848 "num_base_bdevs_operational": 3, 00:10:35.848 "base_bdevs_list": [ 00:10:35.848 { 00:10:35.848 "name": "BaseBdev1", 00:10:35.848 "uuid": "22fcce68-15fe-4b18-ab77-a10cd2c19c40", 00:10:35.848 "is_configured": true, 00:10:35.848 "data_offset": 0, 00:10:35.848 "data_size": 65536 00:10:35.848 }, 00:10:35.848 { 00:10:35.848 "name": "BaseBdev2", 00:10:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.848 "is_configured": false, 00:10:35.848 
"data_offset": 0, 00:10:35.848 "data_size": 0 00:10:35.848 }, 00:10:35.848 { 00:10:35.848 "name": "BaseBdev3", 00:10:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.848 "is_configured": false, 00:10:35.848 "data_offset": 0, 00:10:35.848 "data_size": 0 00:10:35.848 } 00:10:35.848 ] 00:10:35.848 }' 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.848 14:27:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.415 [2024-11-20 14:27:37.266046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.415 [2024-11-20 14:27:37.266143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.415 [2024-11-20 14:27:37.274032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.415 [2024-11-20 14:27:37.276862] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.415 [2024-11-20 14:27:37.277027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:10:36.415 [2024-11-20 14:27:37.277136] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.415 [2024-11-20 14:27:37.277253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.415 "name": "Existed_Raid", 00:10:36.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.415 "strip_size_kb": 0, 00:10:36.415 "state": "configuring", 00:10:36.415 "raid_level": "raid1", 00:10:36.415 "superblock": false, 00:10:36.415 "num_base_bdevs": 3, 00:10:36.415 "num_base_bdevs_discovered": 1, 00:10:36.415 "num_base_bdevs_operational": 3, 00:10:36.415 "base_bdevs_list": [ 00:10:36.415 { 00:10:36.415 "name": "BaseBdev1", 00:10:36.415 "uuid": "22fcce68-15fe-4b18-ab77-a10cd2c19c40", 00:10:36.415 "is_configured": true, 00:10:36.415 "data_offset": 0, 00:10:36.415 "data_size": 65536 00:10:36.415 }, 00:10:36.415 { 00:10:36.415 "name": "BaseBdev2", 00:10:36.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.415 "is_configured": false, 00:10:36.415 "data_offset": 0, 00:10:36.415 "data_size": 0 00:10:36.415 }, 00:10:36.415 { 00:10:36.415 "name": "BaseBdev3", 00:10:36.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.415 "is_configured": false, 00:10:36.415 "data_offset": 0, 00:10:36.415 "data_size": 0 00:10:36.415 } 00:10:36.415 ] 00:10:36.415 }' 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.415 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.983 [2024-11-20 14:27:37.840448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.983 BaseBdev2 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.983 [ 00:10:36.983 { 00:10:36.983 "name": "BaseBdev2", 00:10:36.983 "aliases": [ 00:10:36.983 "e33788f2-3c07-443f-a312-7add6a829b94" 00:10:36.983 ], 00:10:36.983 
"product_name": "Malloc disk", 00:10:36.983 "block_size": 512, 00:10:36.983 "num_blocks": 65536, 00:10:36.983 "uuid": "e33788f2-3c07-443f-a312-7add6a829b94", 00:10:36.983 "assigned_rate_limits": { 00:10:36.983 "rw_ios_per_sec": 0, 00:10:36.983 "rw_mbytes_per_sec": 0, 00:10:36.983 "r_mbytes_per_sec": 0, 00:10:36.983 "w_mbytes_per_sec": 0 00:10:36.983 }, 00:10:36.983 "claimed": true, 00:10:36.983 "claim_type": "exclusive_write", 00:10:36.983 "zoned": false, 00:10:36.983 "supported_io_types": { 00:10:36.983 "read": true, 00:10:36.983 "write": true, 00:10:36.983 "unmap": true, 00:10:36.983 "flush": true, 00:10:36.983 "reset": true, 00:10:36.983 "nvme_admin": false, 00:10:36.983 "nvme_io": false, 00:10:36.983 "nvme_io_md": false, 00:10:36.983 "write_zeroes": true, 00:10:36.983 "zcopy": true, 00:10:36.983 "get_zone_info": false, 00:10:36.983 "zone_management": false, 00:10:36.983 "zone_append": false, 00:10:36.983 "compare": false, 00:10:36.983 "compare_and_write": false, 00:10:36.983 "abort": true, 00:10:36.983 "seek_hole": false, 00:10:36.983 "seek_data": false, 00:10:36.983 "copy": true, 00:10:36.983 "nvme_iov_md": false 00:10:36.983 }, 00:10:36.983 "memory_domains": [ 00:10:36.983 { 00:10:36.983 "dma_device_id": "system", 00:10:36.983 "dma_device_type": 1 00:10:36.983 }, 00:10:36.983 { 00:10:36.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.983 "dma_device_type": 2 00:10:36.983 } 00:10:36.983 ], 00:10:36.983 "driver_specific": {} 00:10:36.983 } 00:10:36.983 ] 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.983 "name": "Existed_Raid", 00:10:36.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.983 "strip_size_kb": 0, 00:10:36.983 "state": "configuring", 00:10:36.983 "raid_level": "raid1", 00:10:36.983 "superblock": false, 00:10:36.983 
"num_base_bdevs": 3, 00:10:36.983 "num_base_bdevs_discovered": 2, 00:10:36.983 "num_base_bdevs_operational": 3, 00:10:36.983 "base_bdevs_list": [ 00:10:36.983 { 00:10:36.983 "name": "BaseBdev1", 00:10:36.983 "uuid": "22fcce68-15fe-4b18-ab77-a10cd2c19c40", 00:10:36.983 "is_configured": true, 00:10:36.983 "data_offset": 0, 00:10:36.983 "data_size": 65536 00:10:36.983 }, 00:10:36.983 { 00:10:36.983 "name": "BaseBdev2", 00:10:36.983 "uuid": "e33788f2-3c07-443f-a312-7add6a829b94", 00:10:36.983 "is_configured": true, 00:10:36.983 "data_offset": 0, 00:10:36.983 "data_size": 65536 00:10:36.983 }, 00:10:36.983 { 00:10:36.983 "name": "BaseBdev3", 00:10:36.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.983 "is_configured": false, 00:10:36.983 "data_offset": 0, 00:10:36.983 "data_size": 0 00:10:36.983 } 00:10:36.983 ] 00:10:36.983 }' 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.983 14:27:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.550 [2024-11-20 14:27:38.418790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.550 [2024-11-20 14:27:38.418888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:37.550 [2024-11-20 14:27:38.418925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:37.550 [2024-11-20 14:27:38.419367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:37.550 [2024-11-20 14:27:38.419690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:37.550 [2024-11-20 14:27:38.419712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:37.550 BaseBdev3 00:10:37.550 [2024-11-20 14:27:38.420140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.550 [ 00:10:37.550 { 00:10:37.550 "name": "BaseBdev3", 00:10:37.550 "aliases": [ 00:10:37.550 
"22be29af-f5e4-49f1-afe1-16361ce96c04" 00:10:37.550 ], 00:10:37.550 "product_name": "Malloc disk", 00:10:37.550 "block_size": 512, 00:10:37.550 "num_blocks": 65536, 00:10:37.550 "uuid": "22be29af-f5e4-49f1-afe1-16361ce96c04", 00:10:37.550 "assigned_rate_limits": { 00:10:37.550 "rw_ios_per_sec": 0, 00:10:37.550 "rw_mbytes_per_sec": 0, 00:10:37.550 "r_mbytes_per_sec": 0, 00:10:37.550 "w_mbytes_per_sec": 0 00:10:37.550 }, 00:10:37.550 "claimed": true, 00:10:37.550 "claim_type": "exclusive_write", 00:10:37.550 "zoned": false, 00:10:37.550 "supported_io_types": { 00:10:37.550 "read": true, 00:10:37.550 "write": true, 00:10:37.550 "unmap": true, 00:10:37.550 "flush": true, 00:10:37.550 "reset": true, 00:10:37.550 "nvme_admin": false, 00:10:37.550 "nvme_io": false, 00:10:37.550 "nvme_io_md": false, 00:10:37.550 "write_zeroes": true, 00:10:37.550 "zcopy": true, 00:10:37.550 "get_zone_info": false, 00:10:37.550 "zone_management": false, 00:10:37.550 "zone_append": false, 00:10:37.550 "compare": false, 00:10:37.550 "compare_and_write": false, 00:10:37.550 "abort": true, 00:10:37.550 "seek_hole": false, 00:10:37.550 "seek_data": false, 00:10:37.550 "copy": true, 00:10:37.550 "nvme_iov_md": false 00:10:37.550 }, 00:10:37.550 "memory_domains": [ 00:10:37.550 { 00:10:37.550 "dma_device_id": "system", 00:10:37.550 "dma_device_type": 1 00:10:37.550 }, 00:10:37.550 { 00:10:37.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.550 "dma_device_type": 2 00:10:37.550 } 00:10:37.550 ], 00:10:37.550 "driver_specific": {} 00:10:37.550 } 00:10:37.550 ] 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.550 
14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.550 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.551 "name": "Existed_Raid", 00:10:37.551 "uuid": "2991159a-7125-4c11-adca-dca5cdb02cd3", 00:10:37.551 "strip_size_kb": 0, 00:10:37.551 "state": "online", 00:10:37.551 "raid_level": 
"raid1", 00:10:37.551 "superblock": false, 00:10:37.551 "num_base_bdevs": 3, 00:10:37.551 "num_base_bdevs_discovered": 3, 00:10:37.551 "num_base_bdevs_operational": 3, 00:10:37.551 "base_bdevs_list": [ 00:10:37.551 { 00:10:37.551 "name": "BaseBdev1", 00:10:37.551 "uuid": "22fcce68-15fe-4b18-ab77-a10cd2c19c40", 00:10:37.551 "is_configured": true, 00:10:37.551 "data_offset": 0, 00:10:37.551 "data_size": 65536 00:10:37.551 }, 00:10:37.551 { 00:10:37.551 "name": "BaseBdev2", 00:10:37.551 "uuid": "e33788f2-3c07-443f-a312-7add6a829b94", 00:10:37.551 "is_configured": true, 00:10:37.551 "data_offset": 0, 00:10:37.551 "data_size": 65536 00:10:37.551 }, 00:10:37.551 { 00:10:37.551 "name": "BaseBdev3", 00:10:37.551 "uuid": "22be29af-f5e4-49f1-afe1-16361ce96c04", 00:10:37.551 "is_configured": true, 00:10:37.551 "data_offset": 0, 00:10:37.551 "data_size": 65536 00:10:37.551 } 00:10:37.551 ] 00:10:37.551 }' 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.551 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.116 [2024-11-20 14:27:38.955734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.116 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.116 "name": "Existed_Raid", 00:10:38.116 "aliases": [ 00:10:38.116 "2991159a-7125-4c11-adca-dca5cdb02cd3" 00:10:38.116 ], 00:10:38.116 "product_name": "Raid Volume", 00:10:38.116 "block_size": 512, 00:10:38.116 "num_blocks": 65536, 00:10:38.116 "uuid": "2991159a-7125-4c11-adca-dca5cdb02cd3", 00:10:38.116 "assigned_rate_limits": { 00:10:38.116 "rw_ios_per_sec": 0, 00:10:38.116 "rw_mbytes_per_sec": 0, 00:10:38.116 "r_mbytes_per_sec": 0, 00:10:38.116 "w_mbytes_per_sec": 0 00:10:38.116 }, 00:10:38.116 "claimed": false, 00:10:38.116 "zoned": false, 00:10:38.116 "supported_io_types": { 00:10:38.116 "read": true, 00:10:38.116 "write": true, 00:10:38.116 "unmap": false, 00:10:38.116 "flush": false, 00:10:38.116 "reset": true, 00:10:38.117 "nvme_admin": false, 00:10:38.117 "nvme_io": false, 00:10:38.117 "nvme_io_md": false, 00:10:38.117 "write_zeroes": true, 00:10:38.117 "zcopy": false, 00:10:38.117 "get_zone_info": false, 00:10:38.117 "zone_management": false, 00:10:38.117 "zone_append": false, 00:10:38.117 "compare": false, 00:10:38.117 "compare_and_write": false, 00:10:38.117 "abort": false, 00:10:38.117 "seek_hole": false, 00:10:38.117 "seek_data": false, 00:10:38.117 "copy": false, 00:10:38.117 "nvme_iov_md": false 00:10:38.117 }, 00:10:38.117 "memory_domains": [ 00:10:38.117 { 00:10:38.117 "dma_device_id": "system", 00:10:38.117 "dma_device_type": 1 00:10:38.117 }, 00:10:38.117 { 
00:10:38.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.117 "dma_device_type": 2 00:10:38.117 }, 00:10:38.117 { 00:10:38.117 "dma_device_id": "system", 00:10:38.117 "dma_device_type": 1 00:10:38.117 }, 00:10:38.117 { 00:10:38.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.117 "dma_device_type": 2 00:10:38.117 }, 00:10:38.117 { 00:10:38.117 "dma_device_id": "system", 00:10:38.117 "dma_device_type": 1 00:10:38.117 }, 00:10:38.117 { 00:10:38.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.117 "dma_device_type": 2 00:10:38.117 } 00:10:38.117 ], 00:10:38.117 "driver_specific": { 00:10:38.117 "raid": { 00:10:38.117 "uuid": "2991159a-7125-4c11-adca-dca5cdb02cd3", 00:10:38.117 "strip_size_kb": 0, 00:10:38.117 "state": "online", 00:10:38.117 "raid_level": "raid1", 00:10:38.117 "superblock": false, 00:10:38.117 "num_base_bdevs": 3, 00:10:38.117 "num_base_bdevs_discovered": 3, 00:10:38.117 "num_base_bdevs_operational": 3, 00:10:38.117 "base_bdevs_list": [ 00:10:38.117 { 00:10:38.117 "name": "BaseBdev1", 00:10:38.117 "uuid": "22fcce68-15fe-4b18-ab77-a10cd2c19c40", 00:10:38.117 "is_configured": true, 00:10:38.117 "data_offset": 0, 00:10:38.117 "data_size": 65536 00:10:38.117 }, 00:10:38.117 { 00:10:38.117 "name": "BaseBdev2", 00:10:38.117 "uuid": "e33788f2-3c07-443f-a312-7add6a829b94", 00:10:38.117 "is_configured": true, 00:10:38.117 "data_offset": 0, 00:10:38.117 "data_size": 65536 00:10:38.117 }, 00:10:38.117 { 00:10:38.117 "name": "BaseBdev3", 00:10:38.117 "uuid": "22be29af-f5e4-49f1-afe1-16361ce96c04", 00:10:38.117 "is_configured": true, 00:10:38.117 "data_offset": 0, 00:10:38.117 "data_size": 65536 00:10:38.117 } 00:10:38.117 ] 00:10:38.117 } 00:10:38.117 } 00:10:38.117 }' 00:10:38.117 14:27:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:10:38.117 BaseBdev2 00:10:38.117 BaseBdev3' 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.117 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.374 [2024-11-20 14:27:39.275452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:38.374 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.375 14:27:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.375 "name": "Existed_Raid", 00:10:38.375 "uuid": "2991159a-7125-4c11-adca-dca5cdb02cd3", 00:10:38.375 "strip_size_kb": 0, 00:10:38.375 "state": "online", 00:10:38.375 "raid_level": "raid1", 00:10:38.375 "superblock": false, 00:10:38.375 "num_base_bdevs": 3, 00:10:38.375 "num_base_bdevs_discovered": 2, 00:10:38.375 "num_base_bdevs_operational": 2, 00:10:38.375 "base_bdevs_list": [ 00:10:38.375 { 00:10:38.375 "name": null, 00:10:38.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.375 "is_configured": false, 00:10:38.375 "data_offset": 0, 00:10:38.375 "data_size": 65536 00:10:38.375 }, 00:10:38.375 { 00:10:38.375 "name": "BaseBdev2", 00:10:38.375 "uuid": "e33788f2-3c07-443f-a312-7add6a829b94", 00:10:38.375 "is_configured": true, 00:10:38.375 "data_offset": 0, 00:10:38.375 "data_size": 65536 00:10:38.375 }, 00:10:38.375 { 00:10:38.375 "name": "BaseBdev3", 00:10:38.375 "uuid": "22be29af-f5e4-49f1-afe1-16361ce96c04", 00:10:38.375 "is_configured": true, 00:10:38.375 "data_offset": 0, 00:10:38.375 "data_size": 65536 00:10:38.375 } 00:10:38.375 ] 00:10:38.375 }' 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.375 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.939 14:27:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.939 [2024-11-20 14:27:39.928260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.197 [2024-11-20 14:27:40.089577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.197 [2024-11-20 14:27:40.090032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.197 [2024-11-20 14:27:40.182056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.197 [2024-11-20 14:27:40.182456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.197 [2024-11-20 14:27:40.182491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:39.197 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 
-- # '[' -n '' ']' 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 BaseBdev2 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 [ 00:10:39.455 { 00:10:39.455 "name": "BaseBdev2", 00:10:39.455 "aliases": [ 00:10:39.455 "7475f06d-61ac-4b68-92dd-921b02d0ed7e" 00:10:39.455 ], 00:10:39.455 "product_name": "Malloc disk", 00:10:39.455 "block_size": 512, 00:10:39.455 "num_blocks": 65536, 00:10:39.455 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:39.455 "assigned_rate_limits": { 00:10:39.455 "rw_ios_per_sec": 0, 00:10:39.455 "rw_mbytes_per_sec": 0, 00:10:39.455 "r_mbytes_per_sec": 0, 00:10:39.455 "w_mbytes_per_sec": 0 00:10:39.455 }, 00:10:39.455 "claimed": false, 00:10:39.455 "zoned": false, 00:10:39.455 "supported_io_types": { 00:10:39.455 "read": true, 00:10:39.455 "write": true, 00:10:39.455 "unmap": true, 00:10:39.455 "flush": true, 00:10:39.455 "reset": true, 00:10:39.455 "nvme_admin": false, 00:10:39.455 "nvme_io": false, 00:10:39.455 "nvme_io_md": false, 00:10:39.455 "write_zeroes": true, 00:10:39.455 "zcopy": true, 00:10:39.455 "get_zone_info": false, 00:10:39.455 "zone_management": false, 00:10:39.455 "zone_append": false, 00:10:39.455 "compare": false, 00:10:39.455 "compare_and_write": false, 00:10:39.455 "abort": true, 00:10:39.455 "seek_hole": false, 00:10:39.455 "seek_data": false, 00:10:39.455 "copy": true, 00:10:39.455 "nvme_iov_md": false 00:10:39.455 }, 00:10:39.455 "memory_domains": [ 00:10:39.455 { 00:10:39.455 "dma_device_id": "system", 00:10:39.455 "dma_device_type": 1 00:10:39.455 }, 00:10:39.455 { 00:10:39.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.455 "dma_device_type": 2 00:10:39.455 } 00:10:39.455 ], 00:10:39.455 "driver_specific": {} 00:10:39.455 } 00:10:39.455 ] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 BaseBdev3 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.455 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 [ 00:10:39.456 { 00:10:39.456 "name": "BaseBdev3", 00:10:39.456 "aliases": [ 00:10:39.456 "4e78b0d8-6574-416c-aff3-71da700f9077" 00:10:39.456 ], 00:10:39.456 "product_name": "Malloc disk", 00:10:39.456 "block_size": 512, 00:10:39.456 "num_blocks": 65536, 00:10:39.456 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:39.456 "assigned_rate_limits": { 00:10:39.456 "rw_ios_per_sec": 0, 00:10:39.456 "rw_mbytes_per_sec": 0, 00:10:39.456 "r_mbytes_per_sec": 0, 00:10:39.456 "w_mbytes_per_sec": 0 00:10:39.456 }, 00:10:39.456 "claimed": false, 00:10:39.456 "zoned": false, 00:10:39.456 "supported_io_types": { 00:10:39.456 "read": true, 00:10:39.456 "write": true, 00:10:39.456 "unmap": true, 00:10:39.456 "flush": true, 00:10:39.456 "reset": true, 00:10:39.456 "nvme_admin": false, 00:10:39.456 "nvme_io": false, 00:10:39.456 "nvme_io_md": false, 00:10:39.456 "write_zeroes": true, 00:10:39.456 "zcopy": true, 00:10:39.456 "get_zone_info": false, 00:10:39.456 "zone_management": false, 00:10:39.456 "zone_append": false, 00:10:39.456 "compare": false, 00:10:39.456 "compare_and_write": false, 00:10:39.456 "abort": true, 00:10:39.456 "seek_hole": false, 00:10:39.456 "seek_data": false, 00:10:39.456 "copy": true, 00:10:39.456 "nvme_iov_md": false 00:10:39.456 }, 00:10:39.456 "memory_domains": [ 00:10:39.456 { 00:10:39.456 "dma_device_id": "system", 00:10:39.456 "dma_device_type": 1 00:10:39.456 }, 00:10:39.456 { 00:10:39.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.456 "dma_device_type": 2 00:10:39.456 } 00:10:39.456 ], 00:10:39.456 "driver_specific": {} 00:10:39.456 } 00:10:39.456 ] 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.456 [2024-11-20 14:27:40.421283] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.456 [2024-11-20 14:27:40.421673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.456 [2024-11-20 14:27:40.421825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.456 [2024-11-20 14:27:40.424689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.456 "name": "Existed_Raid", 00:10:39.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.456 "strip_size_kb": 0, 00:10:39.456 "state": "configuring", 00:10:39.456 "raid_level": "raid1", 00:10:39.456 "superblock": false, 00:10:39.456 "num_base_bdevs": 3, 00:10:39.456 "num_base_bdevs_discovered": 2, 00:10:39.456 "num_base_bdevs_operational": 3, 00:10:39.456 "base_bdevs_list": [ 00:10:39.456 { 00:10:39.456 "name": "BaseBdev1", 00:10:39.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.456 "is_configured": false, 00:10:39.456 "data_offset": 0, 00:10:39.456 "data_size": 0 00:10:39.456 }, 00:10:39.456 { 00:10:39.456 "name": "BaseBdev2", 00:10:39.456 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:39.456 "is_configured": true, 00:10:39.456 "data_offset": 0, 00:10:39.456 "data_size": 
65536 00:10:39.456 }, 00:10:39.456 { 00:10:39.456 "name": "BaseBdev3", 00:10:39.456 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:39.456 "is_configured": true, 00:10:39.456 "data_offset": 0, 00:10:39.456 "data_size": 65536 00:10:39.456 } 00:10:39.456 ] 00:10:39.456 }' 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.456 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.021 [2024-11-20 14:27:40.953467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.021 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.022 14:27:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.022 14:27:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.022 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.022 "name": "Existed_Raid", 00:10:40.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.022 "strip_size_kb": 0, 00:10:40.022 "state": "configuring", 00:10:40.022 "raid_level": "raid1", 00:10:40.022 "superblock": false, 00:10:40.022 "num_base_bdevs": 3, 00:10:40.022 "num_base_bdevs_discovered": 1, 00:10:40.022 "num_base_bdevs_operational": 3, 00:10:40.022 "base_bdevs_list": [ 00:10:40.022 { 00:10:40.022 "name": "BaseBdev1", 00:10:40.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.022 "is_configured": false, 00:10:40.022 "data_offset": 0, 00:10:40.022 "data_size": 0 00:10:40.022 }, 00:10:40.022 { 00:10:40.022 "name": null, 00:10:40.022 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:40.022 "is_configured": false, 00:10:40.022 "data_offset": 0, 00:10:40.022 "data_size": 65536 00:10:40.022 }, 00:10:40.022 { 00:10:40.022 "name": "BaseBdev3", 00:10:40.022 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:40.022 "is_configured": true, 00:10:40.022 "data_offset": 0, 00:10:40.022 "data_size": 65536 00:10:40.022 } 00:10:40.022 ] 00:10:40.022 }' 00:10:40.022 14:27:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.022 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.589 [2024-11-20 14:27:41.520000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.589 BaseBdev1 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.589 [ 00:10:40.589 { 00:10:40.589 "name": "BaseBdev1", 00:10:40.589 "aliases": [ 00:10:40.589 "37df7b00-ed7c-4527-8ce9-3b7a3d25c207" 00:10:40.589 ], 00:10:40.589 "product_name": "Malloc disk", 00:10:40.589 "block_size": 512, 00:10:40.589 "num_blocks": 65536, 00:10:40.589 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:40.589 "assigned_rate_limits": { 00:10:40.589 "rw_ios_per_sec": 0, 00:10:40.589 "rw_mbytes_per_sec": 0, 00:10:40.589 "r_mbytes_per_sec": 0, 00:10:40.589 "w_mbytes_per_sec": 0 00:10:40.589 }, 00:10:40.589 "claimed": true, 00:10:40.589 "claim_type": "exclusive_write", 00:10:40.589 "zoned": false, 00:10:40.589 "supported_io_types": { 00:10:40.589 "read": true, 00:10:40.589 "write": true, 00:10:40.589 "unmap": true, 00:10:40.589 "flush": true, 00:10:40.589 "reset": true, 00:10:40.589 "nvme_admin": false, 00:10:40.589 "nvme_io": false, 00:10:40.589 "nvme_io_md": false, 00:10:40.589 "write_zeroes": true, 00:10:40.589 "zcopy": true, 00:10:40.589 "get_zone_info": false, 00:10:40.589 "zone_management": false, 
00:10:40.589 "zone_append": false, 00:10:40.589 "compare": false, 00:10:40.589 "compare_and_write": false, 00:10:40.589 "abort": true, 00:10:40.589 "seek_hole": false, 00:10:40.589 "seek_data": false, 00:10:40.589 "copy": true, 00:10:40.589 "nvme_iov_md": false 00:10:40.589 }, 00:10:40.589 "memory_domains": [ 00:10:40.589 { 00:10:40.589 "dma_device_id": "system", 00:10:40.589 "dma_device_type": 1 00:10:40.589 }, 00:10:40.589 { 00:10:40.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.589 "dma_device_type": 2 00:10:40.589 } 00:10:40.589 ], 00:10:40.589 "driver_specific": {} 00:10:40.589 } 00:10:40.589 ] 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.589 
14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.589 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.590 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.590 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.590 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.590 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.590 "name": "Existed_Raid", 00:10:40.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.590 "strip_size_kb": 0, 00:10:40.590 "state": "configuring", 00:10:40.590 "raid_level": "raid1", 00:10:40.590 "superblock": false, 00:10:40.590 "num_base_bdevs": 3, 00:10:40.590 "num_base_bdevs_discovered": 2, 00:10:40.590 "num_base_bdevs_operational": 3, 00:10:40.590 "base_bdevs_list": [ 00:10:40.590 { 00:10:40.590 "name": "BaseBdev1", 00:10:40.590 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:40.590 "is_configured": true, 00:10:40.590 "data_offset": 0, 00:10:40.590 "data_size": 65536 00:10:40.590 }, 00:10:40.590 { 00:10:40.590 "name": null, 00:10:40.590 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:40.590 "is_configured": false, 00:10:40.590 "data_offset": 0, 00:10:40.590 "data_size": 65536 00:10:40.590 }, 00:10:40.590 { 00:10:40.590 "name": "BaseBdev3", 00:10:40.590 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:40.590 "is_configured": true, 00:10:40.590 "data_offset": 0, 00:10:40.590 "data_size": 65536 00:10:40.590 } 00:10:40.590 ] 00:10:40.590 }' 00:10:40.590 14:27:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.590 14:27:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.192 14:27:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.192 [2024-11-20 14:27:42.124206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.192 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.193 "name": "Existed_Raid", 00:10:41.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.193 "strip_size_kb": 0, 00:10:41.193 "state": "configuring", 00:10:41.193 "raid_level": "raid1", 00:10:41.193 "superblock": false, 00:10:41.193 "num_base_bdevs": 3, 00:10:41.193 "num_base_bdevs_discovered": 1, 00:10:41.193 "num_base_bdevs_operational": 3, 00:10:41.193 "base_bdevs_list": [ 00:10:41.193 { 00:10:41.193 "name": "BaseBdev1", 00:10:41.193 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:41.193 "is_configured": true, 00:10:41.193 "data_offset": 0, 00:10:41.193 "data_size": 65536 00:10:41.193 }, 00:10:41.193 { 00:10:41.193 "name": null, 00:10:41.193 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:41.193 "is_configured": false, 00:10:41.193 "data_offset": 0, 00:10:41.193 "data_size": 65536 00:10:41.193 }, 00:10:41.193 { 00:10:41.193 "name": null, 00:10:41.193 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 
00:10:41.193 "is_configured": false, 00:10:41.193 "data_offset": 0, 00:10:41.193 "data_size": 65536 00:10:41.193 } 00:10:41.193 ] 00:10:41.193 }' 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.193 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.761 [2024-11-20 14:27:42.668439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.761 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.761 "name": "Existed_Raid", 00:10:41.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.761 "strip_size_kb": 0, 00:10:41.761 "state": "configuring", 00:10:41.761 "raid_level": "raid1", 00:10:41.761 "superblock": false, 00:10:41.761 "num_base_bdevs": 3, 00:10:41.761 "num_base_bdevs_discovered": 2, 00:10:41.761 "num_base_bdevs_operational": 3, 00:10:41.761 "base_bdevs_list": [ 00:10:41.761 { 00:10:41.761 "name": "BaseBdev1", 00:10:41.761 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:41.761 
"is_configured": true, 00:10:41.761 "data_offset": 0, 00:10:41.761 "data_size": 65536 00:10:41.761 }, 00:10:41.761 { 00:10:41.761 "name": null, 00:10:41.761 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:41.761 "is_configured": false, 00:10:41.761 "data_offset": 0, 00:10:41.761 "data_size": 65536 00:10:41.761 }, 00:10:41.761 { 00:10:41.761 "name": "BaseBdev3", 00:10:41.761 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:41.761 "is_configured": true, 00:10:41.761 "data_offset": 0, 00:10:41.761 "data_size": 65536 00:10:41.762 } 00:10:41.762 ] 00:10:41.762 }' 00:10:41.762 14:27:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.762 14:27:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.327 [2024-11-20 14:27:43.216559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:42.327 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.328 "name": "Existed_Raid", 00:10:42.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.328 "strip_size_kb": 0, 00:10:42.328 "state": 
"configuring", 00:10:42.328 "raid_level": "raid1", 00:10:42.328 "superblock": false, 00:10:42.328 "num_base_bdevs": 3, 00:10:42.328 "num_base_bdevs_discovered": 1, 00:10:42.328 "num_base_bdevs_operational": 3, 00:10:42.328 "base_bdevs_list": [ 00:10:42.328 { 00:10:42.328 "name": null, 00:10:42.328 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:42.328 "is_configured": false, 00:10:42.328 "data_offset": 0, 00:10:42.328 "data_size": 65536 00:10:42.328 }, 00:10:42.328 { 00:10:42.328 "name": null, 00:10:42.328 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:42.328 "is_configured": false, 00:10:42.328 "data_offset": 0, 00:10:42.328 "data_size": 65536 00:10:42.328 }, 00:10:42.328 { 00:10:42.328 "name": "BaseBdev3", 00:10:42.328 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:42.328 "is_configured": true, 00:10:42.328 "data_offset": 0, 00:10:42.328 "data_size": 65536 00:10:42.328 } 00:10:42.328 ] 00:10:42.328 }' 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.328 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:42.893 14:27:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.893 [2024-11-20 14:27:43.859377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.893 "name": "Existed_Raid", 00:10:42.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.893 "strip_size_kb": 0, 00:10:42.893 "state": "configuring", 00:10:42.893 "raid_level": "raid1", 00:10:42.893 "superblock": false, 00:10:42.893 "num_base_bdevs": 3, 00:10:42.893 "num_base_bdevs_discovered": 2, 00:10:42.893 "num_base_bdevs_operational": 3, 00:10:42.893 "base_bdevs_list": [ 00:10:42.893 { 00:10:42.893 "name": null, 00:10:42.893 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:42.893 "is_configured": false, 00:10:42.893 "data_offset": 0, 00:10:42.893 "data_size": 65536 00:10:42.893 }, 00:10:42.893 { 00:10:42.893 "name": "BaseBdev2", 00:10:42.893 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:42.893 "is_configured": true, 00:10:42.893 "data_offset": 0, 00:10:42.893 "data_size": 65536 00:10:42.893 }, 00:10:42.893 { 00:10:42.893 "name": "BaseBdev3", 00:10:42.893 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:42.893 "is_configured": true, 00:10:42.893 "data_offset": 0, 00:10:42.893 "data_size": 65536 00:10:42.893 } 00:10:42.893 ] 00:10:42.893 }' 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.893 14:27:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 37df7b00-ed7c-4527-8ce9-3b7a3d25c207 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.458 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.716 [2024-11-20 14:27:44.542014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:43.716 [2024-11-20 14:27:44.542128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:43.716 [2024-11-20 14:27:44.542142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:43.716 [2024-11-20 14:27:44.542478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:43.716 [2024-11-20 14:27:44.542705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:43.716 [2024-11-20 14:27:44.542727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:10:43.716 [2024-11-20 14:27:44.543049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.716 NewBaseBdev 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:43.716 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.717 [ 00:10:43.717 { 00:10:43.717 "name": "NewBaseBdev", 00:10:43.717 "aliases": [ 00:10:43.717 "37df7b00-ed7c-4527-8ce9-3b7a3d25c207" 00:10:43.717 ], 00:10:43.717 "product_name": "Malloc disk", 00:10:43.717 "block_size": 512, 00:10:43.717 "num_blocks": 65536, 
00:10:43.717 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:43.717 "assigned_rate_limits": { 00:10:43.717 "rw_ios_per_sec": 0, 00:10:43.717 "rw_mbytes_per_sec": 0, 00:10:43.717 "r_mbytes_per_sec": 0, 00:10:43.717 "w_mbytes_per_sec": 0 00:10:43.717 }, 00:10:43.717 "claimed": true, 00:10:43.717 "claim_type": "exclusive_write", 00:10:43.717 "zoned": false, 00:10:43.717 "supported_io_types": { 00:10:43.717 "read": true, 00:10:43.717 "write": true, 00:10:43.717 "unmap": true, 00:10:43.717 "flush": true, 00:10:43.717 "reset": true, 00:10:43.717 "nvme_admin": false, 00:10:43.717 "nvme_io": false, 00:10:43.717 "nvme_io_md": false, 00:10:43.717 "write_zeroes": true, 00:10:43.717 "zcopy": true, 00:10:43.717 "get_zone_info": false, 00:10:43.717 "zone_management": false, 00:10:43.717 "zone_append": false, 00:10:43.717 "compare": false, 00:10:43.717 "compare_and_write": false, 00:10:43.717 "abort": true, 00:10:43.717 "seek_hole": false, 00:10:43.717 "seek_data": false, 00:10:43.717 "copy": true, 00:10:43.717 "nvme_iov_md": false 00:10:43.717 }, 00:10:43.717 "memory_domains": [ 00:10:43.717 { 00:10:43.717 "dma_device_id": "system", 00:10:43.717 "dma_device_type": 1 00:10:43.717 }, 00:10:43.717 { 00:10:43.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.717 "dma_device_type": 2 00:10:43.717 } 00:10:43.717 ], 00:10:43.717 "driver_specific": {} 00:10:43.717 } 00:10:43.717 ] 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.717 
14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.717 "name": "Existed_Raid", 00:10:43.717 "uuid": "d93402ab-8e62-48f3-99b6-0ce747754d07", 00:10:43.717 "strip_size_kb": 0, 00:10:43.717 "state": "online", 00:10:43.717 "raid_level": "raid1", 00:10:43.717 "superblock": false, 00:10:43.717 "num_base_bdevs": 3, 00:10:43.717 "num_base_bdevs_discovered": 3, 00:10:43.717 "num_base_bdevs_operational": 3, 00:10:43.717 "base_bdevs_list": [ 00:10:43.717 { 00:10:43.717 "name": "NewBaseBdev", 00:10:43.717 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:43.717 "is_configured": true, 00:10:43.717 
"data_offset": 0, 00:10:43.717 "data_size": 65536 00:10:43.717 }, 00:10:43.717 { 00:10:43.717 "name": "BaseBdev2", 00:10:43.717 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:43.717 "is_configured": true, 00:10:43.717 "data_offset": 0, 00:10:43.717 "data_size": 65536 00:10:43.717 }, 00:10:43.717 { 00:10:43.717 "name": "BaseBdev3", 00:10:43.717 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:43.717 "is_configured": true, 00:10:43.717 "data_offset": 0, 00:10:43.717 "data_size": 65536 00:10:43.717 } 00:10:43.717 ] 00:10:43.717 }' 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.717 14:27:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.362 [2024-11-20 14:27:45.106583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.362 "name": "Existed_Raid", 00:10:44.362 "aliases": [ 00:10:44.362 "d93402ab-8e62-48f3-99b6-0ce747754d07" 00:10:44.362 ], 00:10:44.362 "product_name": "Raid Volume", 00:10:44.362 "block_size": 512, 00:10:44.362 "num_blocks": 65536, 00:10:44.362 "uuid": "d93402ab-8e62-48f3-99b6-0ce747754d07", 00:10:44.362 "assigned_rate_limits": { 00:10:44.362 "rw_ios_per_sec": 0, 00:10:44.362 "rw_mbytes_per_sec": 0, 00:10:44.362 "r_mbytes_per_sec": 0, 00:10:44.362 "w_mbytes_per_sec": 0 00:10:44.362 }, 00:10:44.362 "claimed": false, 00:10:44.362 "zoned": false, 00:10:44.362 "supported_io_types": { 00:10:44.362 "read": true, 00:10:44.362 "write": true, 00:10:44.362 "unmap": false, 00:10:44.362 "flush": false, 00:10:44.362 "reset": true, 00:10:44.362 "nvme_admin": false, 00:10:44.362 "nvme_io": false, 00:10:44.362 "nvme_io_md": false, 00:10:44.362 "write_zeroes": true, 00:10:44.362 "zcopy": false, 00:10:44.362 "get_zone_info": false, 00:10:44.362 "zone_management": false, 00:10:44.362 "zone_append": false, 00:10:44.362 "compare": false, 00:10:44.362 "compare_and_write": false, 00:10:44.362 "abort": false, 00:10:44.362 "seek_hole": false, 00:10:44.362 "seek_data": false, 00:10:44.362 "copy": false, 00:10:44.362 "nvme_iov_md": false 00:10:44.362 }, 00:10:44.362 "memory_domains": [ 00:10:44.362 { 00:10:44.362 "dma_device_id": "system", 00:10:44.362 "dma_device_type": 1 00:10:44.362 }, 00:10:44.362 { 00:10:44.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.362 "dma_device_type": 2 00:10:44.362 }, 00:10:44.362 { 00:10:44.362 "dma_device_id": "system", 00:10:44.362 "dma_device_type": 1 00:10:44.362 }, 00:10:44.362 { 00:10:44.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.362 "dma_device_type": 2 00:10:44.362 }, 00:10:44.362 { 00:10:44.362 "dma_device_id": 
"system", 00:10:44.362 "dma_device_type": 1 00:10:44.362 }, 00:10:44.362 { 00:10:44.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.362 "dma_device_type": 2 00:10:44.362 } 00:10:44.362 ], 00:10:44.362 "driver_specific": { 00:10:44.362 "raid": { 00:10:44.362 "uuid": "d93402ab-8e62-48f3-99b6-0ce747754d07", 00:10:44.362 "strip_size_kb": 0, 00:10:44.362 "state": "online", 00:10:44.362 "raid_level": "raid1", 00:10:44.362 "superblock": false, 00:10:44.362 "num_base_bdevs": 3, 00:10:44.362 "num_base_bdevs_discovered": 3, 00:10:44.362 "num_base_bdevs_operational": 3, 00:10:44.362 "base_bdevs_list": [ 00:10:44.362 { 00:10:44.362 "name": "NewBaseBdev", 00:10:44.362 "uuid": "37df7b00-ed7c-4527-8ce9-3b7a3d25c207", 00:10:44.362 "is_configured": true, 00:10:44.362 "data_offset": 0, 00:10:44.362 "data_size": 65536 00:10:44.362 }, 00:10:44.362 { 00:10:44.362 "name": "BaseBdev2", 00:10:44.362 "uuid": "7475f06d-61ac-4b68-92dd-921b02d0ed7e", 00:10:44.362 "is_configured": true, 00:10:44.362 "data_offset": 0, 00:10:44.362 "data_size": 65536 00:10:44.362 }, 00:10:44.362 { 00:10:44.362 "name": "BaseBdev3", 00:10:44.362 "uuid": "4e78b0d8-6574-416c-aff3-71da700f9077", 00:10:44.362 "is_configured": true, 00:10:44.362 "data_offset": 0, 00:10:44.362 "data_size": 65536 00:10:44.362 } 00:10:44.362 ] 00:10:44.362 } 00:10:44.362 } 00:10:44.362 }' 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:44.362 BaseBdev2 00:10:44.362 BaseBdev3' 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.362 14:27:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.362 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.363 
14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.363 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.621 [2024-11-20 14:27:45.454333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.621 [2024-11-20 14:27:45.454653] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.621 [2024-11-20 14:27:45.454804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.621 [2024-11-20 14:27:45.455228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.621 [2024-11-20 14:27:45.455247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 67532 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67532 ']' 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67532 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67532 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67532' 00:10:44.621 killing process with pid 67532 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67532 00:10:44.621 14:27:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67532 00:10:44.621 [2024-11-20 14:27:45.495352] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.879 [2024-11-20 14:27:45.830358] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.255 14:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:46.255 00:10:46.255 real 0m12.010s 00:10:46.255 user 0m19.592s 00:10:46.255 sys 0m1.745s 00:10:46.255 ************************************ 00:10:46.255 END TEST raid_state_function_test 00:10:46.255 ************************************ 00:10:46.255 14:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.255 14:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:46.255 14:27:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:46.255 14:27:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:46.255 14:27:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.255 14:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.255 ************************************ 00:10:46.255 START TEST raid_state_function_test_sb 00:10:46.255 ************************************ 00:10:46.255 14:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.256 
14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:46.256 Process raid pid: 68166 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68166 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68166' 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 
68166 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68166 ']' 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.256 14:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.256 [2024-11-20 14:27:47.144747] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:10:46.256 [2024-11-20 14:27:47.145120] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.514 [2024-11-20 14:27:47.338075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.514 [2024-11-20 14:27:47.513782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.772 [2024-11-20 14:27:47.727856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.772 [2024-11-20 14:27:47.728156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.337 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.337 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:47.338 14:27:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.338 [2024-11-20 14:27:48.234657] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.338 [2024-11-20 14:27:48.234733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.338 [2024-11-20 14:27:48.234752] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.338 [2024-11-20 14:27:48.234769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.338 [2024-11-20 14:27:48.234780] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.338 [2024-11-20 14:27:48.234796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.338 "name": "Existed_Raid", 00:10:47.338 "uuid": "c8e1eaf5-6e31-400e-a77b-a2c478770a5a", 00:10:47.338 "strip_size_kb": 0, 00:10:47.338 "state": "configuring", 00:10:47.338 "raid_level": "raid1", 00:10:47.338 "superblock": true, 00:10:47.338 "num_base_bdevs": 3, 00:10:47.338 "num_base_bdevs_discovered": 0, 00:10:47.338 "num_base_bdevs_operational": 3, 00:10:47.338 "base_bdevs_list": [ 00:10:47.338 { 00:10:47.338 "name": "BaseBdev1", 00:10:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.338 "is_configured": false, 00:10:47.338 "data_offset": 0, 00:10:47.338 "data_size": 0 00:10:47.338 }, 00:10:47.338 { 00:10:47.338 "name": "BaseBdev2", 00:10:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.338 "is_configured": false, 00:10:47.338 "data_offset": 0, 00:10:47.338 "data_size": 0 
00:10:47.338 }, 00:10:47.338 { 00:10:47.338 "name": "BaseBdev3", 00:10:47.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.338 "is_configured": false, 00:10:47.338 "data_offset": 0, 00:10:47.338 "data_size": 0 00:10:47.338 } 00:10:47.338 ] 00:10:47.338 }' 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.338 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.905 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.905 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.905 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.905 [2024-11-20 14:27:48.762712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.905 [2024-11-20 14:27:48.762762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:47.905 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.905 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.905 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.905 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.906 [2024-11-20 14:27:48.770682] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.906 [2024-11-20 14:27:48.770855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.906 [2024-11-20 14:27:48.770882] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:10:47.906 [2024-11-20 14:27:48.770901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.906 [2024-11-20 14:27:48.770911] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.906 [2024-11-20 14:27:48.770925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.906 [2024-11-20 14:27:48.815993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.906 BaseBdev1 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.906 14:27:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.906 [ 00:10:47.906 { 00:10:47.906 "name": "BaseBdev1", 00:10:47.906 "aliases": [ 00:10:47.906 "7775df4e-7e72-4c42-8c32-51d7e4978b9c" 00:10:47.906 ], 00:10:47.906 "product_name": "Malloc disk", 00:10:47.906 "block_size": 512, 00:10:47.906 "num_blocks": 65536, 00:10:47.906 "uuid": "7775df4e-7e72-4c42-8c32-51d7e4978b9c", 00:10:47.906 "assigned_rate_limits": { 00:10:47.906 "rw_ios_per_sec": 0, 00:10:47.906 "rw_mbytes_per_sec": 0, 00:10:47.906 "r_mbytes_per_sec": 0, 00:10:47.906 "w_mbytes_per_sec": 0 00:10:47.906 }, 00:10:47.906 "claimed": true, 00:10:47.906 "claim_type": "exclusive_write", 00:10:47.906 "zoned": false, 00:10:47.906 "supported_io_types": { 00:10:47.906 "read": true, 00:10:47.906 "write": true, 00:10:47.906 "unmap": true, 00:10:47.906 "flush": true, 00:10:47.906 "reset": true, 00:10:47.906 "nvme_admin": false, 00:10:47.906 "nvme_io": false, 00:10:47.906 "nvme_io_md": false, 00:10:47.906 "write_zeroes": true, 00:10:47.906 "zcopy": true, 00:10:47.906 "get_zone_info": false, 00:10:47.906 "zone_management": false, 00:10:47.906 "zone_append": false, 00:10:47.906 "compare": false, 00:10:47.906 "compare_and_write": false, 00:10:47.906 "abort": true, 00:10:47.906 "seek_hole": false, 00:10:47.906 "seek_data": false, 00:10:47.906 "copy": true, 00:10:47.906 "nvme_iov_md": false 00:10:47.906 }, 
00:10:47.906 "memory_domains": [ 00:10:47.906 { 00:10:47.906 "dma_device_id": "system", 00:10:47.906 "dma_device_type": 1 00:10:47.906 }, 00:10:47.906 { 00:10:47.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.906 "dma_device_type": 2 00:10:47.906 } 00:10:47.906 ], 00:10:47.906 "driver_specific": {} 00:10:47.906 } 00:10:47.906 ] 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.906 14:27:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.906 "name": "Existed_Raid", 00:10:47.906 "uuid": "092379cb-4936-4df5-a228-d23a95577574", 00:10:47.906 "strip_size_kb": 0, 00:10:47.906 "state": "configuring", 00:10:47.906 "raid_level": "raid1", 00:10:47.906 "superblock": true, 00:10:47.906 "num_base_bdevs": 3, 00:10:47.906 "num_base_bdevs_discovered": 1, 00:10:47.906 "num_base_bdevs_operational": 3, 00:10:47.906 "base_bdevs_list": [ 00:10:47.906 { 00:10:47.906 "name": "BaseBdev1", 00:10:47.906 "uuid": "7775df4e-7e72-4c42-8c32-51d7e4978b9c", 00:10:47.906 "is_configured": true, 00:10:47.906 "data_offset": 2048, 00:10:47.906 "data_size": 63488 00:10:47.906 }, 00:10:47.906 { 00:10:47.906 "name": "BaseBdev2", 00:10:47.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.906 "is_configured": false, 00:10:47.906 "data_offset": 0, 00:10:47.906 "data_size": 0 00:10:47.906 }, 00:10:47.906 { 00:10:47.906 "name": "BaseBdev3", 00:10:47.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.906 "is_configured": false, 00:10:47.906 "data_offset": 0, 00:10:47.906 "data_size": 0 00:10:47.906 } 00:10:47.906 ] 00:10:47.906 }' 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.906 14:27:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.472 
14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 [2024-11-20 14:27:49.368201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.472 [2024-11-20 14:27:49.368271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 [2024-11-20 14:27:49.376281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.472 [2024-11-20 14:27:49.379011] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.472 [2024-11-20 14:27:49.379199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.472 [2024-11-20 14:27:49.379320] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.472 [2024-11-20 14:27:49.379380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.472 "name": "Existed_Raid", 00:10:48.472 "uuid": "d4e96654-d9ab-42c2-8642-a5bc162d0ef4", 00:10:48.472 "strip_size_kb": 0, 00:10:48.472 "state": "configuring", 00:10:48.472 "raid_level": "raid1", 00:10:48.472 "superblock": true, 00:10:48.472 
"num_base_bdevs": 3, 00:10:48.472 "num_base_bdevs_discovered": 1, 00:10:48.472 "num_base_bdevs_operational": 3, 00:10:48.472 "base_bdevs_list": [ 00:10:48.472 { 00:10:48.472 "name": "BaseBdev1", 00:10:48.472 "uuid": "7775df4e-7e72-4c42-8c32-51d7e4978b9c", 00:10:48.472 "is_configured": true, 00:10:48.472 "data_offset": 2048, 00:10:48.472 "data_size": 63488 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": "BaseBdev2", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "is_configured": false, 00:10:48.472 "data_offset": 0, 00:10:48.472 "data_size": 0 00:10:48.472 }, 00:10:48.472 { 00:10:48.472 "name": "BaseBdev3", 00:10:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.472 "is_configured": false, 00:10:48.472 "data_offset": 0, 00:10:48.472 "data_size": 0 00:10:48.472 } 00:10:48.472 ] 00:10:48.472 }' 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.472 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.038 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.038 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.038 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.038 BaseBdev2 00:10:49.038 [2024-11-20 14:27:49.955010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.038 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 
-- # local bdev_timeout= 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.039 [ 00:10:49.039 { 00:10:49.039 "name": "BaseBdev2", 00:10:49.039 "aliases": [ 00:10:49.039 "edc044c5-2aea-42e2-af2c-58ffd2f00264" 00:10:49.039 ], 00:10:49.039 "product_name": "Malloc disk", 00:10:49.039 "block_size": 512, 00:10:49.039 "num_blocks": 65536, 00:10:49.039 "uuid": "edc044c5-2aea-42e2-af2c-58ffd2f00264", 00:10:49.039 "assigned_rate_limits": { 00:10:49.039 "rw_ios_per_sec": 0, 00:10:49.039 "rw_mbytes_per_sec": 0, 00:10:49.039 "r_mbytes_per_sec": 0, 00:10:49.039 "w_mbytes_per_sec": 0 00:10:49.039 }, 00:10:49.039 "claimed": true, 00:10:49.039 "claim_type": "exclusive_write", 00:10:49.039 "zoned": false, 00:10:49.039 "supported_io_types": { 00:10:49.039 "read": true, 00:10:49.039 "write": true, 00:10:49.039 "unmap": true, 00:10:49.039 "flush": true, 00:10:49.039 "reset": true, 00:10:49.039 
"nvme_admin": false, 00:10:49.039 "nvme_io": false, 00:10:49.039 "nvme_io_md": false, 00:10:49.039 "write_zeroes": true, 00:10:49.039 "zcopy": true, 00:10:49.039 "get_zone_info": false, 00:10:49.039 "zone_management": false, 00:10:49.039 "zone_append": false, 00:10:49.039 "compare": false, 00:10:49.039 "compare_and_write": false, 00:10:49.039 "abort": true, 00:10:49.039 "seek_hole": false, 00:10:49.039 "seek_data": false, 00:10:49.039 "copy": true, 00:10:49.039 "nvme_iov_md": false 00:10:49.039 }, 00:10:49.039 "memory_domains": [ 00:10:49.039 { 00:10:49.039 "dma_device_id": "system", 00:10:49.039 "dma_device_type": 1 00:10:49.039 }, 00:10:49.039 { 00:10:49.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.039 "dma_device_type": 2 00:10:49.039 } 00:10:49.039 ], 00:10:49.039 "driver_specific": {} 00:10:49.039 } 00:10:49.039 ] 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.039 14:27:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.039 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.039 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.039 "name": "Existed_Raid", 00:10:49.039 "uuid": "d4e96654-d9ab-42c2-8642-a5bc162d0ef4", 00:10:49.039 "strip_size_kb": 0, 00:10:49.039 "state": "configuring", 00:10:49.039 "raid_level": "raid1", 00:10:49.039 "superblock": true, 00:10:49.039 "num_base_bdevs": 3, 00:10:49.039 "num_base_bdevs_discovered": 2, 00:10:49.039 "num_base_bdevs_operational": 3, 00:10:49.039 "base_bdevs_list": [ 00:10:49.039 { 00:10:49.039 "name": "BaseBdev1", 00:10:49.039 "uuid": "7775df4e-7e72-4c42-8c32-51d7e4978b9c", 00:10:49.039 "is_configured": true, 00:10:49.039 "data_offset": 2048, 00:10:49.039 "data_size": 63488 00:10:49.039 }, 00:10:49.039 { 00:10:49.039 "name": "BaseBdev2", 00:10:49.039 "uuid": "edc044c5-2aea-42e2-af2c-58ffd2f00264", 00:10:49.039 "is_configured": true, 00:10:49.039 "data_offset": 2048, 00:10:49.039 "data_size": 
63488 00:10:49.039 }, 00:10:49.039 { 00:10:49.039 "name": "BaseBdev3", 00:10:49.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.039 "is_configured": false, 00:10:49.039 "data_offset": 0, 00:10:49.039 "data_size": 0 00:10:49.039 } 00:10:49.039 ] 00:10:49.039 }' 00:10:49.039 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.039 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.605 [2024-11-20 14:27:50.538959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.605 BaseBdev3 00:10:49.605 [2024-11-20 14:27:50.539686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.605 [2024-11-20 14:27:50.539730] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:49.605 [2024-11-20 14:27:50.540168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:49.605 [2024-11-20 14:27:50.540438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.605 [2024-11-20 14:27:50.540459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.605 [2024-11-20 14:27:50.540767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:49.605 
14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.605 [ 00:10:49.605 { 00:10:49.605 "name": "BaseBdev3", 00:10:49.605 "aliases": [ 00:10:49.605 "9c25aaa8-25b1-49bf-af06-d86eee1cfb8d" 00:10:49.605 ], 00:10:49.605 "product_name": "Malloc disk", 00:10:49.605 "block_size": 512, 00:10:49.605 "num_blocks": 65536, 00:10:49.605 "uuid": "9c25aaa8-25b1-49bf-af06-d86eee1cfb8d", 00:10:49.605 "assigned_rate_limits": { 00:10:49.605 "rw_ios_per_sec": 0, 00:10:49.605 "rw_mbytes_per_sec": 0, 00:10:49.605 "r_mbytes_per_sec": 0, 00:10:49.605 "w_mbytes_per_sec": 0 00:10:49.605 }, 00:10:49.605 "claimed": true, 00:10:49.605 "claim_type": "exclusive_write", 00:10:49.605 "zoned": 
false, 00:10:49.605 "supported_io_types": { 00:10:49.605 "read": true, 00:10:49.605 "write": true, 00:10:49.605 "unmap": true, 00:10:49.605 "flush": true, 00:10:49.605 "reset": true, 00:10:49.605 "nvme_admin": false, 00:10:49.605 "nvme_io": false, 00:10:49.605 "nvme_io_md": false, 00:10:49.605 "write_zeroes": true, 00:10:49.605 "zcopy": true, 00:10:49.605 "get_zone_info": false, 00:10:49.605 "zone_management": false, 00:10:49.605 "zone_append": false, 00:10:49.605 "compare": false, 00:10:49.605 "compare_and_write": false, 00:10:49.605 "abort": true, 00:10:49.605 "seek_hole": false, 00:10:49.605 "seek_data": false, 00:10:49.605 "copy": true, 00:10:49.605 "nvme_iov_md": false 00:10:49.605 }, 00:10:49.605 "memory_domains": [ 00:10:49.605 { 00:10:49.605 "dma_device_id": "system", 00:10:49.605 "dma_device_type": 1 00:10:49.605 }, 00:10:49.605 { 00:10:49.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.605 "dma_device_type": 2 00:10:49.605 } 00:10:49.605 ], 00:10:49.605 "driver_specific": {} 00:10:49.605 } 00:10:49.605 ] 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:49.605 14:27:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.605 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.606 "name": "Existed_Raid", 00:10:49.606 "uuid": "d4e96654-d9ab-42c2-8642-a5bc162d0ef4", 00:10:49.606 "strip_size_kb": 0, 00:10:49.606 "state": "online", 00:10:49.606 "raid_level": "raid1", 00:10:49.606 "superblock": true, 00:10:49.606 "num_base_bdevs": 3, 00:10:49.606 "num_base_bdevs_discovered": 3, 00:10:49.606 "num_base_bdevs_operational": 3, 00:10:49.606 "base_bdevs_list": [ 00:10:49.606 { 00:10:49.606 "name": "BaseBdev1", 00:10:49.606 "uuid": "7775df4e-7e72-4c42-8c32-51d7e4978b9c", 00:10:49.606 "is_configured": true, 00:10:49.606 "data_offset": 2048, 00:10:49.606 "data_size": 63488 00:10:49.606 }, 00:10:49.606 { 00:10:49.606 
"name": "BaseBdev2", 00:10:49.606 "uuid": "edc044c5-2aea-42e2-af2c-58ffd2f00264", 00:10:49.606 "is_configured": true, 00:10:49.606 "data_offset": 2048, 00:10:49.606 "data_size": 63488 00:10:49.606 }, 00:10:49.606 { 00:10:49.606 "name": "BaseBdev3", 00:10:49.606 "uuid": "9c25aaa8-25b1-49bf-af06-d86eee1cfb8d", 00:10:49.606 "is_configured": true, 00:10:49.606 "data_offset": 2048, 00:10:49.606 "data_size": 63488 00:10:49.606 } 00:10:49.606 ] 00:10:49.606 }' 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.606 14:27:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.173 [2024-11-20 14:27:51.091609] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.173 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.173 "name": "Existed_Raid", 00:10:50.173 "aliases": [ 00:10:50.173 "d4e96654-d9ab-42c2-8642-a5bc162d0ef4" 00:10:50.173 ], 00:10:50.173 "product_name": "Raid Volume", 00:10:50.173 "block_size": 512, 00:10:50.173 "num_blocks": 63488, 00:10:50.173 "uuid": "d4e96654-d9ab-42c2-8642-a5bc162d0ef4", 00:10:50.173 "assigned_rate_limits": { 00:10:50.173 "rw_ios_per_sec": 0, 00:10:50.173 "rw_mbytes_per_sec": 0, 00:10:50.173 "r_mbytes_per_sec": 0, 00:10:50.173 "w_mbytes_per_sec": 0 00:10:50.173 }, 00:10:50.173 "claimed": false, 00:10:50.173 "zoned": false, 00:10:50.173 "supported_io_types": { 00:10:50.173 "read": true, 00:10:50.173 "write": true, 00:10:50.173 "unmap": false, 00:10:50.173 "flush": false, 00:10:50.174 "reset": true, 00:10:50.174 "nvme_admin": false, 00:10:50.174 "nvme_io": false, 00:10:50.174 "nvme_io_md": false, 00:10:50.174 "write_zeroes": true, 00:10:50.174 "zcopy": false, 00:10:50.174 "get_zone_info": false, 00:10:50.174 "zone_management": false, 00:10:50.174 "zone_append": false, 00:10:50.174 "compare": false, 00:10:50.174 "compare_and_write": false, 00:10:50.174 "abort": false, 00:10:50.174 "seek_hole": false, 00:10:50.174 "seek_data": false, 00:10:50.174 "copy": false, 00:10:50.174 "nvme_iov_md": false 00:10:50.174 }, 00:10:50.174 "memory_domains": [ 00:10:50.174 { 00:10:50.174 "dma_device_id": "system", 00:10:50.174 "dma_device_type": 1 00:10:50.174 }, 00:10:50.174 { 00:10:50.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.174 "dma_device_type": 2 00:10:50.174 }, 00:10:50.174 { 00:10:50.174 "dma_device_id": "system", 00:10:50.174 "dma_device_type": 1 00:10:50.174 }, 00:10:50.174 { 00:10:50.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.174 "dma_device_type": 2 00:10:50.174 }, 00:10:50.174 { 00:10:50.174 "dma_device_id": "system", 00:10:50.174 "dma_device_type": 1 00:10:50.174 }, 
00:10:50.174 { 00:10:50.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.174 "dma_device_type": 2 00:10:50.174 } 00:10:50.174 ], 00:10:50.174 "driver_specific": { 00:10:50.174 "raid": { 00:10:50.174 "uuid": "d4e96654-d9ab-42c2-8642-a5bc162d0ef4", 00:10:50.174 "strip_size_kb": 0, 00:10:50.174 "state": "online", 00:10:50.174 "raid_level": "raid1", 00:10:50.174 "superblock": true, 00:10:50.174 "num_base_bdevs": 3, 00:10:50.174 "num_base_bdevs_discovered": 3, 00:10:50.174 "num_base_bdevs_operational": 3, 00:10:50.174 "base_bdevs_list": [ 00:10:50.174 { 00:10:50.174 "name": "BaseBdev1", 00:10:50.174 "uuid": "7775df4e-7e72-4c42-8c32-51d7e4978b9c", 00:10:50.174 "is_configured": true, 00:10:50.174 "data_offset": 2048, 00:10:50.174 "data_size": 63488 00:10:50.174 }, 00:10:50.174 { 00:10:50.174 "name": "BaseBdev2", 00:10:50.174 "uuid": "edc044c5-2aea-42e2-af2c-58ffd2f00264", 00:10:50.174 "is_configured": true, 00:10:50.174 "data_offset": 2048, 00:10:50.174 "data_size": 63488 00:10:50.174 }, 00:10:50.174 { 00:10:50.174 "name": "BaseBdev3", 00:10:50.174 "uuid": "9c25aaa8-25b1-49bf-af06-d86eee1cfb8d", 00:10:50.174 "is_configured": true, 00:10:50.174 "data_offset": 2048, 00:10:50.174 "data_size": 63488 00:10:50.174 } 00:10:50.174 ] 00:10:50.174 } 00:10:50.174 } 00:10:50.174 }' 00:10:50.174 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.174 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.174 BaseBdev2 00:10:50.174 BaseBdev3' 00:10:50.174 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.433 14:27:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.433 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.433 [2024-11-20 14:27:51.399399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.691 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.691 "name": "Existed_Raid", 00:10:50.691 "uuid": "d4e96654-d9ab-42c2-8642-a5bc162d0ef4", 00:10:50.691 "strip_size_kb": 0, 00:10:50.691 "state": "online", 00:10:50.691 "raid_level": 
"raid1", 00:10:50.691 "superblock": true, 00:10:50.691 "num_base_bdevs": 3, 00:10:50.691 "num_base_bdevs_discovered": 2, 00:10:50.691 "num_base_bdevs_operational": 2, 00:10:50.691 "base_bdevs_list": [ 00:10:50.691 { 00:10:50.691 "name": null, 00:10:50.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.691 "is_configured": false, 00:10:50.691 "data_offset": 0, 00:10:50.692 "data_size": 63488 00:10:50.692 }, 00:10:50.692 { 00:10:50.692 "name": "BaseBdev2", 00:10:50.692 "uuid": "edc044c5-2aea-42e2-af2c-58ffd2f00264", 00:10:50.692 "is_configured": true, 00:10:50.692 "data_offset": 2048, 00:10:50.692 "data_size": 63488 00:10:50.692 }, 00:10:50.692 { 00:10:50.692 "name": "BaseBdev3", 00:10:50.692 "uuid": "9c25aaa8-25b1-49bf-af06-d86eee1cfb8d", 00:10:50.692 "is_configured": true, 00:10:50.692 "data_offset": 2048, 00:10:50.692 "data_size": 63488 00:10:50.692 } 00:10:50.692 ] 00:10:50.692 }' 00:10:50.692 14:27:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.692 14:27:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:10:51.258 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.259 [2024-11-20 14:27:52.079617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:51.259 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.259 14:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.259 [2024-11-20 14:27:52.224506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.259 [2024-11-20 14:27:52.224979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.517 [2024-11-20 14:27:52.317060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.517 [2024-11-20 14:27:52.317159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.517 [2024-11-20 14:27:52.317184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:51.517 14:27:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.517 BaseBdev2 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.517 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.518 14:27:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 [ 00:10:51.518 { 00:10:51.518 "name": "BaseBdev2", 00:10:51.518 "aliases": [ 00:10:51.518 "59d52bd3-665d-46b5-82bc-7f47bb2a9989" 00:10:51.518 ], 00:10:51.518 "product_name": "Malloc disk", 00:10:51.518 "block_size": 512, 00:10:51.518 "num_blocks": 65536, 00:10:51.518 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:51.518 "assigned_rate_limits": { 00:10:51.518 "rw_ios_per_sec": 0, 00:10:51.518 "rw_mbytes_per_sec": 0, 00:10:51.518 "r_mbytes_per_sec": 0, 00:10:51.518 "w_mbytes_per_sec": 0 00:10:51.518 }, 00:10:51.518 "claimed": false, 00:10:51.518 "zoned": false, 00:10:51.518 "supported_io_types": { 00:10:51.518 "read": true, 00:10:51.518 "write": true, 00:10:51.518 "unmap": true, 00:10:51.518 "flush": true, 00:10:51.518 "reset": true, 00:10:51.518 "nvme_admin": false, 00:10:51.518 "nvme_io": false, 00:10:51.518 "nvme_io_md": false, 00:10:51.518 "write_zeroes": true, 00:10:51.518 "zcopy": true, 00:10:51.518 "get_zone_info": false, 00:10:51.518 "zone_management": false, 00:10:51.518 "zone_append": false, 00:10:51.518 "compare": false, 00:10:51.518 "compare_and_write": false, 00:10:51.518 "abort": true, 00:10:51.518 "seek_hole": false, 00:10:51.518 "seek_data": false, 00:10:51.518 "copy": true, 00:10:51.518 "nvme_iov_md": false 00:10:51.518 }, 00:10:51.518 "memory_domains": [ 00:10:51.518 { 00:10:51.518 "dma_device_id": "system", 00:10:51.518 "dma_device_type": 1 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.518 "dma_device_type": 2 00:10:51.518 } 00:10:51.518 ], 00:10:51.518 "driver_specific": {} 00:10:51.518 } 00:10:51.518 ] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 BaseBdev3 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 [ 00:10:51.518 { 00:10:51.518 "name": "BaseBdev3", 00:10:51.518 "aliases": [ 00:10:51.518 "0aa154dd-7028-412f-9679-d9b62287876c" 00:10:51.518 ], 00:10:51.518 "product_name": "Malloc disk", 00:10:51.518 "block_size": 512, 00:10:51.518 "num_blocks": 65536, 00:10:51.518 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:51.518 "assigned_rate_limits": { 00:10:51.518 "rw_ios_per_sec": 0, 00:10:51.518 "rw_mbytes_per_sec": 0, 00:10:51.518 "r_mbytes_per_sec": 0, 00:10:51.518 "w_mbytes_per_sec": 0 00:10:51.518 }, 00:10:51.518 "claimed": false, 00:10:51.518 "zoned": false, 00:10:51.518 "supported_io_types": { 00:10:51.518 "read": true, 00:10:51.518 "write": true, 00:10:51.518 "unmap": true, 00:10:51.518 "flush": true, 00:10:51.518 "reset": true, 00:10:51.518 "nvme_admin": false, 00:10:51.518 "nvme_io": false, 00:10:51.518 "nvme_io_md": false, 00:10:51.518 "write_zeroes": true, 00:10:51.518 "zcopy": true, 00:10:51.518 "get_zone_info": false, 00:10:51.518 "zone_management": false, 00:10:51.518 "zone_append": false, 00:10:51.518 "compare": false, 00:10:51.518 "compare_and_write": false, 00:10:51.518 "abort": true, 00:10:51.518 "seek_hole": false, 00:10:51.518 "seek_data": false, 00:10:51.518 "copy": true, 00:10:51.518 "nvme_iov_md": false 00:10:51.518 }, 00:10:51.518 "memory_domains": [ 00:10:51.518 { 00:10:51.518 "dma_device_id": "system", 00:10:51.518 "dma_device_type": 1 00:10:51.518 }, 00:10:51.518 { 00:10:51.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.518 "dma_device_type": 2 00:10:51.518 } 00:10:51.518 ], 00:10:51.518 "driver_specific": {} 00:10:51.518 } 00:10:51.518 ] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.518 
14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 [2024-11-20 14:27:52.550448] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.518 [2024-11-20 14:27:52.550694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.518 [2024-11-20 14:27:52.550747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.518 [2024-11-20 14:27:52.553884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.518 14:27:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.518 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.777 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.777 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.777 "name": "Existed_Raid", 00:10:51.777 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:51.777 "strip_size_kb": 0, 00:10:51.777 "state": "configuring", 00:10:51.777 "raid_level": "raid1", 00:10:51.777 "superblock": true, 00:10:51.777 "num_base_bdevs": 3, 00:10:51.777 "num_base_bdevs_discovered": 2, 00:10:51.777 "num_base_bdevs_operational": 3, 00:10:51.777 "base_bdevs_list": [ 00:10:51.777 { 00:10:51.777 "name": "BaseBdev1", 00:10:51.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.777 "is_configured": false, 00:10:51.777 "data_offset": 0, 00:10:51.777 "data_size": 0 00:10:51.777 }, 00:10:51.777 { 00:10:51.777 "name": "BaseBdev2", 00:10:51.777 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:51.777 "is_configured": 
true, 00:10:51.777 "data_offset": 2048, 00:10:51.777 "data_size": 63488 00:10:51.777 }, 00:10:51.777 { 00:10:51.777 "name": "BaseBdev3", 00:10:51.777 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:51.777 "is_configured": true, 00:10:51.777 "data_offset": 2048, 00:10:51.777 "data_size": 63488 00:10:51.777 } 00:10:51.777 ] 00:10:51.777 }' 00:10:51.777 14:27:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.777 14:27:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.035 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.036 [2024-11-20 14:27:53.062610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.036 14:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.036 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.294 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.294 "name": "Existed_Raid", 00:10:52.294 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:52.294 "strip_size_kb": 0, 00:10:52.294 "state": "configuring", 00:10:52.294 "raid_level": "raid1", 00:10:52.294 "superblock": true, 00:10:52.294 "num_base_bdevs": 3, 00:10:52.294 "num_base_bdevs_discovered": 1, 00:10:52.294 "num_base_bdevs_operational": 3, 00:10:52.294 "base_bdevs_list": [ 00:10:52.294 { 00:10:52.294 "name": "BaseBdev1", 00:10:52.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.294 "is_configured": false, 00:10:52.294 "data_offset": 0, 00:10:52.294 "data_size": 0 00:10:52.294 }, 00:10:52.294 { 00:10:52.294 "name": null, 00:10:52.294 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:52.294 "is_configured": false, 00:10:52.294 "data_offset": 0, 00:10:52.294 "data_size": 63488 00:10:52.294 }, 00:10:52.294 { 00:10:52.294 "name": "BaseBdev3", 00:10:52.294 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:52.294 "is_configured": true, 
00:10:52.294 "data_offset": 2048, 00:10:52.294 "data_size": 63488 00:10:52.294 } 00:10:52.294 ] 00:10:52.294 }' 00:10:52.294 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.294 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.553 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.553 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.553 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.553 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.812 [2024-11-20 14:27:53.684432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.812 BaseBdev1 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.812 
14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.812 [ 00:10:52.812 { 00:10:52.812 "name": "BaseBdev1", 00:10:52.812 "aliases": [ 00:10:52.812 "3cb18351-004f-47d8-9296-b7a790db7b1f" 00:10:52.812 ], 00:10:52.812 "product_name": "Malloc disk", 00:10:52.812 "block_size": 512, 00:10:52.812 "num_blocks": 65536, 00:10:52.812 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:52.812 "assigned_rate_limits": { 00:10:52.812 "rw_ios_per_sec": 0, 00:10:52.812 "rw_mbytes_per_sec": 0, 00:10:52.812 "r_mbytes_per_sec": 0, 00:10:52.812 "w_mbytes_per_sec": 0 00:10:52.812 }, 00:10:52.812 "claimed": true, 00:10:52.812 "claim_type": "exclusive_write", 00:10:52.812 "zoned": false, 00:10:52.812 "supported_io_types": { 00:10:52.812 "read": true, 00:10:52.812 "write": true, 00:10:52.812 "unmap": true, 00:10:52.812 "flush": true, 00:10:52.812 "reset": true, 00:10:52.812 "nvme_admin": false, 00:10:52.812 "nvme_io": 
false, 00:10:52.812 "nvme_io_md": false, 00:10:52.812 "write_zeroes": true, 00:10:52.812 "zcopy": true, 00:10:52.812 "get_zone_info": false, 00:10:52.812 "zone_management": false, 00:10:52.812 "zone_append": false, 00:10:52.812 "compare": false, 00:10:52.812 "compare_and_write": false, 00:10:52.812 "abort": true, 00:10:52.812 "seek_hole": false, 00:10:52.812 "seek_data": false, 00:10:52.812 "copy": true, 00:10:52.812 "nvme_iov_md": false 00:10:52.812 }, 00:10:52.812 "memory_domains": [ 00:10:52.812 { 00:10:52.812 "dma_device_id": "system", 00:10:52.812 "dma_device_type": 1 00:10:52.812 }, 00:10:52.812 { 00:10:52.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.812 "dma_device_type": 2 00:10:52.812 } 00:10:52.812 ], 00:10:52.812 "driver_specific": {} 00:10:52.812 } 00:10:52.812 ] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.812 14:27:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.812 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.812 "name": "Existed_Raid", 00:10:52.813 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:52.813 "strip_size_kb": 0, 00:10:52.813 "state": "configuring", 00:10:52.813 "raid_level": "raid1", 00:10:52.813 "superblock": true, 00:10:52.813 "num_base_bdevs": 3, 00:10:52.813 "num_base_bdevs_discovered": 2, 00:10:52.813 "num_base_bdevs_operational": 3, 00:10:52.813 "base_bdevs_list": [ 00:10:52.813 { 00:10:52.813 "name": "BaseBdev1", 00:10:52.813 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:52.813 "is_configured": true, 00:10:52.813 "data_offset": 2048, 00:10:52.813 "data_size": 63488 00:10:52.813 }, 00:10:52.813 { 00:10:52.813 "name": null, 00:10:52.813 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:52.813 "is_configured": false, 00:10:52.813 "data_offset": 0, 00:10:52.813 "data_size": 63488 00:10:52.813 }, 00:10:52.813 { 00:10:52.813 "name": "BaseBdev3", 00:10:52.813 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:52.813 "is_configured": true, 00:10:52.813 "data_offset": 2048, 00:10:52.813 "data_size": 63488 00:10:52.813 } 00:10:52.813 ] 00:10:52.813 }' 
00:10:52.813 14:27:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.813 14:27:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.380 [2024-11-20 14:27:54.312803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.380 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.380 
14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.381 "name": "Existed_Raid", 00:10:53.381 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:53.381 "strip_size_kb": 0, 00:10:53.381 "state": "configuring", 00:10:53.381 "raid_level": "raid1", 00:10:53.381 "superblock": true, 00:10:53.381 "num_base_bdevs": 3, 00:10:53.381 "num_base_bdevs_discovered": 1, 00:10:53.381 "num_base_bdevs_operational": 3, 00:10:53.381 "base_bdevs_list": [ 00:10:53.381 { 00:10:53.381 "name": "BaseBdev1", 00:10:53.381 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:53.381 "is_configured": true, 00:10:53.381 "data_offset": 2048, 00:10:53.381 "data_size": 63488 00:10:53.381 }, 00:10:53.381 { 
00:10:53.381 "name": null, 00:10:53.381 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:53.381 "is_configured": false, 00:10:53.381 "data_offset": 0, 00:10:53.381 "data_size": 63488 00:10:53.381 }, 00:10:53.381 { 00:10:53.381 "name": null, 00:10:53.381 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:53.381 "is_configured": false, 00:10:53.381 "data_offset": 0, 00:10:53.381 "data_size": 63488 00:10:53.381 } 00:10:53.381 ] 00:10:53.381 }' 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.381 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 [2024-11-20 14:27:54.888961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 14:27:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.947 "name": "Existed_Raid", 00:10:53.947 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:53.947 "strip_size_kb": 0, 
00:10:53.947 "state": "configuring", 00:10:53.947 "raid_level": "raid1", 00:10:53.947 "superblock": true, 00:10:53.947 "num_base_bdevs": 3, 00:10:53.947 "num_base_bdevs_discovered": 2, 00:10:53.947 "num_base_bdevs_operational": 3, 00:10:53.947 "base_bdevs_list": [ 00:10:53.947 { 00:10:53.947 "name": "BaseBdev1", 00:10:53.947 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:53.947 "is_configured": true, 00:10:53.947 "data_offset": 2048, 00:10:53.947 "data_size": 63488 00:10:53.947 }, 00:10:53.947 { 00:10:53.947 "name": null, 00:10:53.947 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:53.947 "is_configured": false, 00:10:53.947 "data_offset": 0, 00:10:53.947 "data_size": 63488 00:10:53.947 }, 00:10:53.947 { 00:10:53.947 "name": "BaseBdev3", 00:10:53.947 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:53.947 "is_configured": true, 00:10:53.947 "data_offset": 2048, 00:10:53.947 "data_size": 63488 00:10:53.947 } 00:10:53.947 ] 00:10:53.947 }' 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.947 14:27:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.515 [2024-11-20 14:27:55.449123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.515 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.774 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.774 "name": "Existed_Raid", 00:10:54.774 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:54.774 "strip_size_kb": 0, 00:10:54.774 "state": "configuring", 00:10:54.774 "raid_level": "raid1", 00:10:54.774 "superblock": true, 00:10:54.774 "num_base_bdevs": 3, 00:10:54.774 "num_base_bdevs_discovered": 1, 00:10:54.774 "num_base_bdevs_operational": 3, 00:10:54.774 "base_bdevs_list": [ 00:10:54.774 { 00:10:54.774 "name": null, 00:10:54.774 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:54.774 "is_configured": false, 00:10:54.774 "data_offset": 0, 00:10:54.774 "data_size": 63488 00:10:54.774 }, 00:10:54.774 { 00:10:54.774 "name": null, 00:10:54.774 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:54.774 "is_configured": false, 00:10:54.774 "data_offset": 0, 00:10:54.774 "data_size": 63488 00:10:54.774 }, 00:10:54.774 { 00:10:54.774 "name": "BaseBdev3", 00:10:54.774 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:54.774 "is_configured": true, 00:10:54.774 "data_offset": 2048, 00:10:54.774 "data_size": 63488 00:10:54.774 } 00:10:54.774 ] 00:10:54.774 }' 00:10:54.774 14:27:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.774 14:27:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.033 [2024-11-20 14:27:56.081959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.033 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.346 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.346 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.346 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.346 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.346 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.346 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.346 "name": "Existed_Raid", 00:10:55.346 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:55.346 "strip_size_kb": 0, 00:10:55.346 "state": "configuring", 00:10:55.346 "raid_level": "raid1", 00:10:55.346 "superblock": true, 00:10:55.346 "num_base_bdevs": 3, 00:10:55.346 "num_base_bdevs_discovered": 2, 00:10:55.346 "num_base_bdevs_operational": 3, 00:10:55.346 "base_bdevs_list": [ 00:10:55.346 { 00:10:55.346 "name": null, 00:10:55.346 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:55.346 "is_configured": false, 00:10:55.346 "data_offset": 0, 00:10:55.346 "data_size": 63488 00:10:55.346 }, 00:10:55.346 { 00:10:55.346 "name": "BaseBdev2", 00:10:55.346 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:55.346 "is_configured": true, 00:10:55.346 "data_offset": 2048, 00:10:55.346 "data_size": 63488 00:10:55.346 }, 00:10:55.346 { 00:10:55.346 "name": "BaseBdev3", 00:10:55.346 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:55.346 "is_configured": true, 00:10:55.346 "data_offset": 2048, 00:10:55.346 "data_size": 63488 00:10:55.346 } 00:10:55.346 ] 00:10:55.346 }' 00:10:55.346 14:27:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.347 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.636 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:55.636 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.636 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.636 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.636 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.636 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3cb18351-004f-47d8-9296-b7a790db7b1f 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.637 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.969 [2024-11-20 14:27:56.710876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:55.969 
[2024-11-20 14:27:56.711445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.969 [2024-11-20 14:27:56.711470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:55.969 [2024-11-20 14:27:56.711808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:55.969 NewBaseBdev 00:10:55.969 [2024-11-20 14:27:56.712001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.969 [2024-11-20 14:27:56.712023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:55.969 [2024-11-20 14:27:56.712192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.969 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.969 [ 00:10:55.969 { 00:10:55.970 "name": "NewBaseBdev", 00:10:55.970 "aliases": [ 00:10:55.970 "3cb18351-004f-47d8-9296-b7a790db7b1f" 00:10:55.970 ], 00:10:55.970 "product_name": "Malloc disk", 00:10:55.970 "block_size": 512, 00:10:55.970 "num_blocks": 65536, 00:10:55.970 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:55.970 "assigned_rate_limits": { 00:10:55.970 "rw_ios_per_sec": 0, 00:10:55.970 "rw_mbytes_per_sec": 0, 00:10:55.970 "r_mbytes_per_sec": 0, 00:10:55.970 "w_mbytes_per_sec": 0 00:10:55.970 }, 00:10:55.970 "claimed": true, 00:10:55.970 "claim_type": "exclusive_write", 00:10:55.970 "zoned": false, 00:10:55.970 "supported_io_types": { 00:10:55.970 "read": true, 00:10:55.970 "write": true, 00:10:55.970 "unmap": true, 00:10:55.970 "flush": true, 00:10:55.970 "reset": true, 00:10:55.970 "nvme_admin": false, 00:10:55.970 "nvme_io": false, 00:10:55.970 "nvme_io_md": false, 00:10:55.970 "write_zeroes": true, 00:10:55.970 "zcopy": true, 00:10:55.970 "get_zone_info": false, 00:10:55.970 "zone_management": false, 00:10:55.970 "zone_append": false, 00:10:55.970 "compare": false, 00:10:55.970 "compare_and_write": false, 00:10:55.970 "abort": true, 00:10:55.970 "seek_hole": false, 00:10:55.970 "seek_data": false, 00:10:55.970 "copy": true, 00:10:55.970 "nvme_iov_md": false 00:10:55.970 }, 00:10:55.970 "memory_domains": [ 00:10:55.970 { 00:10:55.970 "dma_device_id": "system", 00:10:55.970 "dma_device_type": 1 00:10:55.970 }, 00:10:55.970 { 00:10:55.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.970 "dma_device_type": 2 00:10:55.970 } 
00:10:55.970 ], 00:10:55.970 "driver_specific": {} 00:10:55.970 } 00:10:55.970 ] 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.970 14:27:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.970 "name": "Existed_Raid", 00:10:55.970 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:55.970 "strip_size_kb": 0, 00:10:55.970 "state": "online", 00:10:55.970 "raid_level": "raid1", 00:10:55.970 "superblock": true, 00:10:55.970 "num_base_bdevs": 3, 00:10:55.970 "num_base_bdevs_discovered": 3, 00:10:55.970 "num_base_bdevs_operational": 3, 00:10:55.970 "base_bdevs_list": [ 00:10:55.970 { 00:10:55.970 "name": "NewBaseBdev", 00:10:55.970 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:55.970 "is_configured": true, 00:10:55.970 "data_offset": 2048, 00:10:55.970 "data_size": 63488 00:10:55.970 }, 00:10:55.970 { 00:10:55.970 "name": "BaseBdev2", 00:10:55.970 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:55.970 "is_configured": true, 00:10:55.970 "data_offset": 2048, 00:10:55.970 "data_size": 63488 00:10:55.970 }, 00:10:55.970 { 00:10:55.970 "name": "BaseBdev3", 00:10:55.970 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:55.970 "is_configured": true, 00:10:55.970 "data_offset": 2048, 00:10:55.970 "data_size": 63488 00:10:55.970 } 00:10:55.970 ] 00:10:55.970 }' 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.970 14:27:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 
00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.229 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.229 [2024-11-20 14:27:57.263490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.487 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.487 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.487 "name": "Existed_Raid", 00:10:56.487 "aliases": [ 00:10:56.487 "9282ba89-6453-427b-86ea-87f8fb6a31ea" 00:10:56.487 ], 00:10:56.487 "product_name": "Raid Volume", 00:10:56.487 "block_size": 512, 00:10:56.487 "num_blocks": 63488, 00:10:56.487 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:56.487 "assigned_rate_limits": { 00:10:56.487 "rw_ios_per_sec": 0, 00:10:56.487 "rw_mbytes_per_sec": 0, 00:10:56.487 "r_mbytes_per_sec": 0, 00:10:56.487 "w_mbytes_per_sec": 0 00:10:56.487 }, 00:10:56.487 "claimed": false, 00:10:56.487 "zoned": false, 00:10:56.487 "supported_io_types": { 00:10:56.487 "read": true, 00:10:56.487 "write": true, 00:10:56.487 "unmap": false, 00:10:56.487 "flush": false, 00:10:56.487 "reset": true, 00:10:56.487 "nvme_admin": false, 00:10:56.487 "nvme_io": false, 00:10:56.487 "nvme_io_md": false, 00:10:56.487 "write_zeroes": true, 00:10:56.487 "zcopy": false, 00:10:56.487 "get_zone_info": false, 00:10:56.487 "zone_management": false, 00:10:56.487 
"zone_append": false, 00:10:56.487 "compare": false, 00:10:56.487 "compare_and_write": false, 00:10:56.487 "abort": false, 00:10:56.487 "seek_hole": false, 00:10:56.487 "seek_data": false, 00:10:56.487 "copy": false, 00:10:56.487 "nvme_iov_md": false 00:10:56.487 }, 00:10:56.487 "memory_domains": [ 00:10:56.487 { 00:10:56.487 "dma_device_id": "system", 00:10:56.487 "dma_device_type": 1 00:10:56.487 }, 00:10:56.487 { 00:10:56.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.487 "dma_device_type": 2 00:10:56.487 }, 00:10:56.487 { 00:10:56.487 "dma_device_id": "system", 00:10:56.487 "dma_device_type": 1 00:10:56.487 }, 00:10:56.487 { 00:10:56.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.487 "dma_device_type": 2 00:10:56.487 }, 00:10:56.487 { 00:10:56.487 "dma_device_id": "system", 00:10:56.487 "dma_device_type": 1 00:10:56.487 }, 00:10:56.487 { 00:10:56.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.487 "dma_device_type": 2 00:10:56.487 } 00:10:56.488 ], 00:10:56.488 "driver_specific": { 00:10:56.488 "raid": { 00:10:56.488 "uuid": "9282ba89-6453-427b-86ea-87f8fb6a31ea", 00:10:56.488 "strip_size_kb": 0, 00:10:56.488 "state": "online", 00:10:56.488 "raid_level": "raid1", 00:10:56.488 "superblock": true, 00:10:56.488 "num_base_bdevs": 3, 00:10:56.488 "num_base_bdevs_discovered": 3, 00:10:56.488 "num_base_bdevs_operational": 3, 00:10:56.488 "base_bdevs_list": [ 00:10:56.488 { 00:10:56.488 "name": "NewBaseBdev", 00:10:56.488 "uuid": "3cb18351-004f-47d8-9296-b7a790db7b1f", 00:10:56.488 "is_configured": true, 00:10:56.488 "data_offset": 2048, 00:10:56.488 "data_size": 63488 00:10:56.488 }, 00:10:56.488 { 00:10:56.488 "name": "BaseBdev2", 00:10:56.488 "uuid": "59d52bd3-665d-46b5-82bc-7f47bb2a9989", 00:10:56.488 "is_configured": true, 00:10:56.488 "data_offset": 2048, 00:10:56.488 "data_size": 63488 00:10:56.488 }, 00:10:56.488 { 00:10:56.488 "name": "BaseBdev3", 00:10:56.488 "uuid": "0aa154dd-7028-412f-9679-d9b62287876c", 00:10:56.488 "is_configured": 
true, 00:10:56.488 "data_offset": 2048, 00:10:56.488 "data_size": 63488 00:10:56.488 } 00:10:56.488 ] 00:10:56.488 } 00:10:56.488 } 00:10:56.488 }' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:56.488 BaseBdev2 00:10:56.488 BaseBdev3' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.488 14:27:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.488 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:10:56.746 [2024-11-20 14:27:57.635148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.746 [2024-11-20 14:27:57.635424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.746 [2024-11-20 14:27:57.635645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.746 [2024-11-20 14:27:57.636165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.746 [2024-11-20 14:27:57.636194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68166 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68166 ']' 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68166 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68166 00:10:56.746 killing process with pid 68166 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.746 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68166' 00:10:56.747 14:27:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@973 -- # kill 68166 00:10:56.747 [2024-11-20 14:27:57.677838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.747 14:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68166 00:10:57.005 [2024-11-20 14:27:57.960635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.380 14:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:58.380 00:10:58.380 real 0m12.086s 00:10:58.380 user 0m19.875s 00:10:58.380 sys 0m1.655s 00:10:58.380 ************************************ 00:10:58.380 END TEST raid_state_function_test_sb 00:10:58.380 ************************************ 00:10:58.380 14:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.380 14:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.380 14:27:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:58.380 14:27:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:58.380 14:27:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.380 14:27:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.380 ************************************ 00:10:58.380 START TEST raid_superblock_test 00:10:58.380 ************************************ 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:58.380 
14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68804 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68804 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68804 ']' 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:58.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.380 14:27:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.380 [2024-11-20 14:27:59.271867] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:10:58.380 [2024-11-20 14:27:59.272048] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68804 ] 00:10:58.640 [2024-11-20 14:27:59.464437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.640 [2024-11-20 14:27:59.636320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.898 [2024-11-20 14:27:59.863075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.898 [2024-11-20 14:27:59.863165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.466 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.466 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.466 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 malloc1 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 [2024-11-20 14:28:00.356735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.467 [2024-11-20 14:28:00.356818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.467 [2024-11-20 14:28:00.356852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:59.467 [2024-11-20 14:28:00.356868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.467 [2024-11-20 14:28:00.359988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.467 [2024-11-20 14:28:00.360275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.467 pt1 00:10:59.467 14:28:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 malloc2 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 [2024-11-20 14:28:00.416926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.467 [2024-11-20 14:28:00.417281] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.467 [2024-11-20 14:28:00.417473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:59.467 [2024-11-20 14:28:00.417605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.467 [2024-11-20 14:28:00.420533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.467 [2024-11-20 14:28:00.420705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.467 pt2 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 malloc3 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 [2024-11-20 14:28:00.493363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:59.467 [2024-11-20 14:28:00.493684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.467 [2024-11-20 14:28:00.493885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:59.467 [2024-11-20 14:28:00.494025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.467 [2024-11-20 14:28:00.496913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.467 [2024-11-20 14:28:00.496959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:59.467 pt3 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.467 [2024-11-20 14:28:00.501684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:59.467 [2024-11-20 14:28:00.504379] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.467 [2024-11-20 14:28:00.504597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:59.467 [2024-11-20 14:28:00.504966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:59.467 [2024-11-20 14:28:00.505002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:59.467 [2024-11-20 14:28:00.505319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:59.467 [2024-11-20 14:28:00.505551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:59.467 [2024-11-20 14:28:00.505572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:59.467 [2024-11-20 14:28:00.505846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.467 14:28:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.467 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.726 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.726 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.726 "name": "raid_bdev1", 00:10:59.726 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:10:59.726 "strip_size_kb": 0, 00:10:59.726 "state": "online", 00:10:59.726 "raid_level": "raid1", 00:10:59.726 "superblock": true, 00:10:59.726 "num_base_bdevs": 3, 00:10:59.726 "num_base_bdevs_discovered": 3, 00:10:59.726 "num_base_bdevs_operational": 3, 00:10:59.726 "base_bdevs_list": [ 00:10:59.726 { 00:10:59.726 "name": "pt1", 00:10:59.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.726 "is_configured": true, 00:10:59.726 "data_offset": 2048, 00:10:59.726 "data_size": 63488 00:10:59.726 }, 00:10:59.726 { 00:10:59.726 "name": "pt2", 00:10:59.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.726 "is_configured": true, 00:10:59.726 "data_offset": 2048, 00:10:59.726 "data_size": 63488 00:10:59.726 }, 00:10:59.726 { 00:10:59.726 "name": "pt3", 00:10:59.726 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.726 "is_configured": true, 00:10:59.726 "data_offset": 2048, 00:10:59.726 "data_size": 63488 00:10:59.726 } 00:10:59.726 ] 00:10:59.726 }' 00:10:59.726 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:10:59.726 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.990 14:28:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.990 [2024-11-20 14:28:00.994429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.990 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.990 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.990 "name": "raid_bdev1", 00:10:59.990 "aliases": [ 00:10:59.990 "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda" 00:10:59.990 ], 00:10:59.990 "product_name": "Raid Volume", 00:10:59.990 "block_size": 512, 00:10:59.990 "num_blocks": 63488, 00:10:59.990 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:10:59.990 "assigned_rate_limits": { 00:10:59.990 "rw_ios_per_sec": 0, 00:10:59.990 "rw_mbytes_per_sec": 0, 00:10:59.990 "r_mbytes_per_sec": 0, 00:10:59.990 "w_mbytes_per_sec": 0 
00:10:59.990 }, 00:10:59.990 "claimed": false, 00:10:59.990 "zoned": false, 00:10:59.990 "supported_io_types": { 00:10:59.990 "read": true, 00:10:59.990 "write": true, 00:10:59.990 "unmap": false, 00:10:59.990 "flush": false, 00:10:59.990 "reset": true, 00:10:59.990 "nvme_admin": false, 00:10:59.990 "nvme_io": false, 00:10:59.990 "nvme_io_md": false, 00:10:59.990 "write_zeroes": true, 00:10:59.990 "zcopy": false, 00:10:59.990 "get_zone_info": false, 00:10:59.990 "zone_management": false, 00:10:59.990 "zone_append": false, 00:10:59.990 "compare": false, 00:10:59.990 "compare_and_write": false, 00:10:59.990 "abort": false, 00:10:59.990 "seek_hole": false, 00:10:59.990 "seek_data": false, 00:10:59.990 "copy": false, 00:10:59.990 "nvme_iov_md": false 00:10:59.990 }, 00:10:59.990 "memory_domains": [ 00:10:59.990 { 00:10:59.990 "dma_device_id": "system", 00:10:59.990 "dma_device_type": 1 00:10:59.990 }, 00:10:59.990 { 00:10:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.990 "dma_device_type": 2 00:10:59.990 }, 00:10:59.990 { 00:10:59.990 "dma_device_id": "system", 00:10:59.990 "dma_device_type": 1 00:10:59.990 }, 00:10:59.990 { 00:10:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.990 "dma_device_type": 2 00:10:59.990 }, 00:10:59.990 { 00:10:59.990 "dma_device_id": "system", 00:10:59.990 "dma_device_type": 1 00:10:59.990 }, 00:10:59.990 { 00:10:59.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.990 "dma_device_type": 2 00:10:59.990 } 00:10:59.990 ], 00:10:59.990 "driver_specific": { 00:10:59.990 "raid": { 00:10:59.990 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:10:59.990 "strip_size_kb": 0, 00:10:59.990 "state": "online", 00:10:59.991 "raid_level": "raid1", 00:10:59.991 "superblock": true, 00:10:59.991 "num_base_bdevs": 3, 00:10:59.991 "num_base_bdevs_discovered": 3, 00:10:59.991 "num_base_bdevs_operational": 3, 00:10:59.991 "base_bdevs_list": [ 00:10:59.991 { 00:10:59.991 "name": "pt1", 00:10:59.991 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:59.991 "is_configured": true, 00:10:59.991 "data_offset": 2048, 00:10:59.991 "data_size": 63488 00:10:59.991 }, 00:10:59.991 { 00:10:59.991 "name": "pt2", 00:10:59.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.991 "is_configured": true, 00:10:59.991 "data_offset": 2048, 00:10:59.991 "data_size": 63488 00:10:59.991 }, 00:10:59.991 { 00:10:59.991 "name": "pt3", 00:10:59.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.991 "is_configured": true, 00:10:59.991 "data_offset": 2048, 00:10:59.991 "data_size": 63488 00:10:59.991 } 00:10:59.991 ] 00:10:59.991 } 00:10:59.991 } 00:10:59.991 }' 00:10:59.991 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.259 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.259 pt2 00:11:00.259 pt3' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.260 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.260 [2024-11-20 14:28:01.302438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8cec8123-5a7e-4f52-8e72-bcf5b13d2dda 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8cec8123-5a7e-4f52-8e72-bcf5b13d2dda ']' 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 [2024-11-20 14:28:01.346023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.518 [2024-11-20 14:28:01.346204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.518 [2024-11-20 14:28:01.346453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.518 [2024-11-20 14:28:01.346687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.518 [2024-11-20 14:28:01.346812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.518 14:28:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 
00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.518 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 [2024-11-20 14:28:01.506101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:00.519 [2024-11-20 14:28:01.508883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:00.519 [2024-11-20 14:28:01.508982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:00.519 [2024-11-20 14:28:01.509060] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:00.519 [2024-11-20 14:28:01.509131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:00.519 [2024-11-20 14:28:01.509164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:00.519 [2024-11-20 14:28:01.509191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.519 [2024-11-20 14:28:01.509205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:00.519 request: 00:11:00.519 { 00:11:00.519 "name": "raid_bdev1", 00:11:00.519 "raid_level": "raid1", 00:11:00.519 "base_bdevs": [ 00:11:00.519 "malloc1", 00:11:00.519 "malloc2", 00:11:00.519 "malloc3" 00:11:00.519 ], 00:11:00.519 "superblock": false, 00:11:00.519 "method": "bdev_raid_create", 00:11:00.519 "req_id": 1 00:11:00.519 } 00:11:00.519 Got JSON-RPC error response 00:11:00.519 response: 00:11:00.519 { 00:11:00.519 "code": -17, 00:11:00.519 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:00.519 } 00:11:00.519 14:28:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.519 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.519 [2024-11-20 14:28:01.570117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.519 [2024-11-20 14:28:01.570357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.519 [2024-11-20 14:28:01.570483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:00.519 [2024-11-20 14:28:01.570595] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.777 [2024-11-20 14:28:01.573728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.777 pt1 00:11:00.777 [2024-11-20 14:28:01.573889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.777 [2024-11-20 14:28:01.574017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:00.777 [2024-11-20 14:28:01.574089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.777 "name": "raid_bdev1", 00:11:00.777 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:00.777 "strip_size_kb": 0, 00:11:00.777 "state": "configuring", 00:11:00.777 "raid_level": "raid1", 00:11:00.777 "superblock": true, 00:11:00.777 "num_base_bdevs": 3, 00:11:00.777 "num_base_bdevs_discovered": 1, 00:11:00.777 "num_base_bdevs_operational": 3, 00:11:00.777 "base_bdevs_list": [ 00:11:00.777 { 00:11:00.777 "name": "pt1", 00:11:00.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.777 "is_configured": true, 00:11:00.777 "data_offset": 2048, 00:11:00.777 "data_size": 63488 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "name": null, 00:11:00.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.777 "is_configured": false, 00:11:00.777 "data_offset": 2048, 00:11:00.777 "data_size": 63488 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "name": null, 00:11:00.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.777 "is_configured": false, 00:11:00.777 "data_offset": 2048, 00:11:00.777 "data_size": 63488 00:11:00.777 } 00:11:00.777 ] 00:11:00.777 }' 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.777 14:28:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.343 [2024-11-20 14:28:02.094320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.343 [2024-11-20 14:28:02.094681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.343 [2024-11-20 14:28:02.094732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:01.343 [2024-11-20 14:28:02.094749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.343 [2024-11-20 14:28:02.095499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.343 [2024-11-20 14:28:02.095537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.343 [2024-11-20 14:28:02.095742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:01.343 [2024-11-20 14:28:02.095796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.343 pt2 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.343 [2024-11-20 14:28:02.102302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:01.343 14:28:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.343 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.343 "name": "raid_bdev1", 00:11:01.343 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:01.343 "strip_size_kb": 0, 00:11:01.343 "state": "configuring", 00:11:01.343 "raid_level": "raid1", 00:11:01.343 "superblock": true, 00:11:01.343 "num_base_bdevs": 3, 00:11:01.343 "num_base_bdevs_discovered": 1, 00:11:01.343 "num_base_bdevs_operational": 3, 00:11:01.343 "base_bdevs_list": [ 
00:11:01.343 { 00:11:01.344 "name": "pt1", 00:11:01.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.344 "is_configured": true, 00:11:01.344 "data_offset": 2048, 00:11:01.344 "data_size": 63488 00:11:01.344 }, 00:11:01.344 { 00:11:01.344 "name": null, 00:11:01.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.344 "is_configured": false, 00:11:01.344 "data_offset": 0, 00:11:01.344 "data_size": 63488 00:11:01.344 }, 00:11:01.344 { 00:11:01.344 "name": null, 00:11:01.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.344 "is_configured": false, 00:11:01.344 "data_offset": 2048, 00:11:01.344 "data_size": 63488 00:11:01.344 } 00:11:01.344 ] 00:11:01.344 }' 00:11:01.344 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.344 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.602 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:01.602 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.602 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.602 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 [2024-11-20 14:28:02.614446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.603 [2024-11-20 14:28:02.614749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.603 [2024-11-20 14:28:02.614928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:01.603 [2024-11-20 14:28:02.614968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.603 [2024-11-20 14:28:02.615722] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.603 [2024-11-20 14:28:02.615760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.603 [2024-11-20 14:28:02.615889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:01.603 [2024-11-20 14:28:02.615948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.603 pt2 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 [2024-11-20 14:28:02.622399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:01.603 [2024-11-20 14:28:02.622619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.603 [2024-11-20 14:28:02.622827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:01.603 [2024-11-20 14:28:02.622997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.603 [2024-11-20 14:28:02.623706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.603 [2024-11-20 14:28:02.623900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:01.603 [2024-11-20 14:28:02.624160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 
00:11:01.603 [2024-11-20 14:28:02.624363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:01.603 [2024-11-20 14:28:02.624756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:01.603 [2024-11-20 14:28:02.624794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:01.603 [2024-11-20 14:28:02.625183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:01.603 [2024-11-20 14:28:02.625441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:01.603 [2024-11-20 14:28:02.625462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:01.603 [2024-11-20 14:28:02.625721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.603 pt3 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.603 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.861 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.861 "name": "raid_bdev1", 00:11:01.861 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:01.861 "strip_size_kb": 0, 00:11:01.861 "state": "online", 00:11:01.861 "raid_level": "raid1", 00:11:01.861 "superblock": true, 00:11:01.861 "num_base_bdevs": 3, 00:11:01.861 "num_base_bdevs_discovered": 3, 00:11:01.861 "num_base_bdevs_operational": 3, 00:11:01.861 "base_bdevs_list": [ 00:11:01.861 { 00:11:01.861 "name": "pt1", 00:11:01.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.861 "is_configured": true, 00:11:01.861 "data_offset": 2048, 00:11:01.861 "data_size": 63488 00:11:01.861 }, 00:11:01.861 { 00:11:01.861 "name": "pt2", 00:11:01.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.861 "is_configured": true, 00:11:01.861 "data_offset": 2048, 00:11:01.861 "data_size": 63488 00:11:01.861 }, 00:11:01.861 { 00:11:01.861 "name": "pt3", 00:11:01.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.861 "is_configured": true, 00:11:01.861 "data_offset": 2048, 00:11:01.861 
"data_size": 63488 00:11:01.861 } 00:11:01.861 ] 00:11:01.861 }' 00:11:01.861 14:28:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.861 14:28:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:02.119 [2024-11-20 14:28:03.151014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.119 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.377 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.377 "name": "raid_bdev1", 00:11:02.377 "aliases": [ 00:11:02.377 "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda" 00:11:02.377 ], 00:11:02.377 "product_name": "Raid Volume", 00:11:02.377 "block_size": 512, 00:11:02.377 "num_blocks": 63488, 00:11:02.377 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:02.377 "assigned_rate_limits": { 
00:11:02.377 "rw_ios_per_sec": 0, 00:11:02.377 "rw_mbytes_per_sec": 0, 00:11:02.377 "r_mbytes_per_sec": 0, 00:11:02.377 "w_mbytes_per_sec": 0 00:11:02.377 }, 00:11:02.377 "claimed": false, 00:11:02.377 "zoned": false, 00:11:02.377 "supported_io_types": { 00:11:02.377 "read": true, 00:11:02.377 "write": true, 00:11:02.377 "unmap": false, 00:11:02.377 "flush": false, 00:11:02.377 "reset": true, 00:11:02.377 "nvme_admin": false, 00:11:02.377 "nvme_io": false, 00:11:02.377 "nvme_io_md": false, 00:11:02.377 "write_zeroes": true, 00:11:02.377 "zcopy": false, 00:11:02.377 "get_zone_info": false, 00:11:02.377 "zone_management": false, 00:11:02.377 "zone_append": false, 00:11:02.377 "compare": false, 00:11:02.377 "compare_and_write": false, 00:11:02.377 "abort": false, 00:11:02.377 "seek_hole": false, 00:11:02.377 "seek_data": false, 00:11:02.377 "copy": false, 00:11:02.377 "nvme_iov_md": false 00:11:02.377 }, 00:11:02.377 "memory_domains": [ 00:11:02.377 { 00:11:02.377 "dma_device_id": "system", 00:11:02.377 "dma_device_type": 1 00:11:02.377 }, 00:11:02.377 { 00:11:02.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.377 "dma_device_type": 2 00:11:02.377 }, 00:11:02.377 { 00:11:02.377 "dma_device_id": "system", 00:11:02.377 "dma_device_type": 1 00:11:02.377 }, 00:11:02.377 { 00:11:02.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.377 "dma_device_type": 2 00:11:02.377 }, 00:11:02.377 { 00:11:02.377 "dma_device_id": "system", 00:11:02.377 "dma_device_type": 1 00:11:02.377 }, 00:11:02.377 { 00:11:02.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.378 "dma_device_type": 2 00:11:02.378 } 00:11:02.378 ], 00:11:02.378 "driver_specific": { 00:11:02.378 "raid": { 00:11:02.378 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:02.378 "strip_size_kb": 0, 00:11:02.378 "state": "online", 00:11:02.378 "raid_level": "raid1", 00:11:02.378 "superblock": true, 00:11:02.378 "num_base_bdevs": 3, 00:11:02.378 "num_base_bdevs_discovered": 3, 00:11:02.378 
"num_base_bdevs_operational": 3, 00:11:02.378 "base_bdevs_list": [ 00:11:02.378 { 00:11:02.378 "name": "pt1", 00:11:02.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:02.378 "is_configured": true, 00:11:02.378 "data_offset": 2048, 00:11:02.378 "data_size": 63488 00:11:02.378 }, 00:11:02.378 { 00:11:02.378 "name": "pt2", 00:11:02.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.378 "is_configured": true, 00:11:02.378 "data_offset": 2048, 00:11:02.378 "data_size": 63488 00:11:02.378 }, 00:11:02.378 { 00:11:02.378 "name": "pt3", 00:11:02.378 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.378 "is_configured": true, 00:11:02.378 "data_offset": 2048, 00:11:02.378 "data_size": 63488 00:11:02.378 } 00:11:02.378 ] 00:11:02.378 } 00:11:02.378 } 00:11:02.378 }' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:02.378 pt2 00:11:02.378 pt3' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.378 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.637 14:28:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:02.637 [2024-11-20 14:28:03.487029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8cec8123-5a7e-4f52-8e72-bcf5b13d2dda '!=' 8cec8123-5a7e-4f52-8e72-bcf5b13d2dda ']' 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 [2024-11-20 14:28:03.542759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.637 14:28:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.637 "name": "raid_bdev1", 00:11:02.637 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:02.637 "strip_size_kb": 0, 00:11:02.637 "state": "online", 00:11:02.637 "raid_level": "raid1", 00:11:02.637 "superblock": true, 00:11:02.637 "num_base_bdevs": 3, 00:11:02.637 "num_base_bdevs_discovered": 2, 00:11:02.637 "num_base_bdevs_operational": 2, 00:11:02.637 "base_bdevs_list": [ 00:11:02.637 { 00:11:02.637 "name": null, 00:11:02.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.637 
"is_configured": false, 00:11:02.637 "data_offset": 0, 00:11:02.637 "data_size": 63488 00:11:02.637 }, 00:11:02.637 { 00:11:02.637 "name": "pt2", 00:11:02.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:02.637 "is_configured": true, 00:11:02.637 "data_offset": 2048, 00:11:02.637 "data_size": 63488 00:11:02.637 }, 00:11:02.637 { 00:11:02.637 "name": "pt3", 00:11:02.637 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:02.637 "is_configured": true, 00:11:02.637 "data_offset": 2048, 00:11:02.637 "data_size": 63488 00:11:02.637 } 00:11:02.637 ] 00:11:02.637 }' 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.637 14:28:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.203 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.203 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 [2024-11-20 14:28:04.038864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.204 [2024-11-20 14:28:04.038906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.204 [2024-11-20 14:28:04.039021] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.204 [2024-11-20 14:28:04.039105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.204 [2024-11-20 14:28:04.039137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.204 
14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 [2024-11-20 14:28:04.126786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:03.204 [2024-11-20 14:28:04.126872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.204 [2024-11-20 14:28:04.126899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:03.204 [2024-11-20 14:28:04.126934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.204 [2024-11-20 14:28:04.130112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.204 [2024-11-20 14:28:04.130165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:03.204 [2024-11-20 14:28:04.130266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:03.204 [2024-11-20 14:28:04.130336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:03.204 pt2 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.204 14:28:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.204 "name": "raid_bdev1", 00:11:03.204 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:03.204 "strip_size_kb": 0, 00:11:03.204 "state": "configuring", 00:11:03.204 "raid_level": "raid1", 00:11:03.204 "superblock": true, 00:11:03.204 "num_base_bdevs": 3, 00:11:03.204 "num_base_bdevs_discovered": 1, 00:11:03.204 "num_base_bdevs_operational": 2, 00:11:03.204 "base_bdevs_list": [ 00:11:03.204 { 00:11:03.204 "name": null, 00:11:03.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.204 
"is_configured": false, 00:11:03.204 "data_offset": 2048, 00:11:03.204 "data_size": 63488 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": "pt2", 00:11:03.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.204 "is_configured": true, 00:11:03.204 "data_offset": 2048, 00:11:03.204 "data_size": 63488 00:11:03.204 }, 00:11:03.204 { 00:11:03.204 "name": null, 00:11:03.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.204 "is_configured": false, 00:11:03.204 "data_offset": 2048, 00:11:03.204 "data_size": 63488 00:11:03.204 } 00:11:03.204 ] 00:11:03.204 }' 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.204 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.769 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:03.769 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:03.769 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:03.769 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:03.769 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.769 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.769 [2024-11-20 14:28:04.626991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:03.770 [2024-11-20 14:28:04.627103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.770 [2024-11-20 14:28:04.627137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:03.770 [2024-11-20 14:28:04.627156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.770 [2024-11-20 14:28:04.627800] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.770 [2024-11-20 14:28:04.627844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:03.770 [2024-11-20 14:28:04.627964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:03.770 [2024-11-20 14:28:04.628007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:03.770 [2024-11-20 14:28:04.628171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:03.770 [2024-11-20 14:28:04.628202] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:03.770 [2024-11-20 14:28:04.628546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:03.770 [2024-11-20 14:28:04.628777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:03.770 [2024-11-20 14:28:04.628831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:03.770 [2024-11-20 14:28:04.629015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.770 pt3 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.770 "name": "raid_bdev1", 00:11:03.770 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:03.770 "strip_size_kb": 0, 00:11:03.770 "state": "online", 00:11:03.770 "raid_level": "raid1", 00:11:03.770 "superblock": true, 00:11:03.770 "num_base_bdevs": 3, 00:11:03.770 "num_base_bdevs_discovered": 2, 00:11:03.770 "num_base_bdevs_operational": 2, 00:11:03.770 "base_bdevs_list": [ 00:11:03.770 { 00:11:03.770 "name": null, 00:11:03.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.770 "is_configured": false, 00:11:03.770 "data_offset": 2048, 00:11:03.770 "data_size": 63488 00:11:03.770 }, 00:11:03.770 { 00:11:03.770 "name": "pt2", 00:11:03.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.770 "is_configured": true, 00:11:03.770 "data_offset": 2048, 00:11:03.770 "data_size": 63488 00:11:03.770 }, 00:11:03.770 { 00:11:03.770 "name": "pt3", 00:11:03.770 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:03.770 "is_configured": true, 00:11:03.770 "data_offset": 2048, 00:11:03.770 "data_size": 63488 00:11:03.770 } 00:11:03.770 ] 00:11:03.770 }' 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.770 14:28:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.337 [2024-11-20 14:28:05.147088] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.337 [2024-11-20 14:28:05.147131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.337 [2024-11-20 14:28:05.147237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.337 [2024-11-20 14:28:05.147326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.337 [2024-11-20 14:28:05.147342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.337 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.338 [2024-11-20 14:28:05.215110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.338 [2024-11-20 14:28:05.215179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.338 [2024-11-20 14:28:05.215210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:04.338 [2024-11-20 14:28:05.215225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.338 [2024-11-20 14:28:05.218159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.338 pt1 00:11:04.338 [2024-11-20 14:28:05.218347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.338 [2024-11-20 14:28:05.218469] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:11:04.338 [2024-11-20 14:28:05.218534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.338 [2024-11-20 14:28:05.218722] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:04.338 [2024-11-20 14:28:05.218741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.338 [2024-11-20 14:28:05.218763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:04.338 [2024-11-20 14:28:05.218835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.338 14:28:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.338 "name": "raid_bdev1", 00:11:04.338 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:04.338 "strip_size_kb": 0, 00:11:04.338 "state": "configuring", 00:11:04.338 "raid_level": "raid1", 00:11:04.338 "superblock": true, 00:11:04.338 "num_base_bdevs": 3, 00:11:04.338 "num_base_bdevs_discovered": 1, 00:11:04.338 "num_base_bdevs_operational": 2, 00:11:04.338 "base_bdevs_list": [ 00:11:04.338 { 00:11:04.338 "name": null, 00:11:04.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.338 "is_configured": false, 00:11:04.338 "data_offset": 2048, 00:11:04.338 "data_size": 63488 00:11:04.338 }, 00:11:04.338 { 00:11:04.338 "name": "pt2", 00:11:04.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.338 "is_configured": true, 00:11:04.338 "data_offset": 2048, 00:11:04.338 "data_size": 63488 00:11:04.338 }, 00:11:04.338 { 00:11:04.338 "name": null, 00:11:04.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.338 "is_configured": false, 00:11:04.338 "data_offset": 2048, 00:11:04.338 "data_size": 63488 00:11:04.338 } 00:11:04.338 ] 00:11:04.338 }' 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.338 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.903 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.903 [2024-11-20 14:28:05.783281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.903 [2024-11-20 14:28:05.783381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.903 [2024-11-20 14:28:05.783419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:04.903 [2024-11-20 14:28:05.783434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.903 [2024-11-20 14:28:05.784141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.903 [2024-11-20 14:28:05.784186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.903 [2024-11-20 14:28:05.784302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:04.903 [2024-11-20 14:28:05.784338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:11:04.903 [2024-11-20 14:28:05.784532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:04.903 [2024-11-20 14:28:05.784563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:04.903 [2024-11-20 14:28:05.784942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:04.904 pt3 00:11:04.904 [2024-11-20 14:28:05.785320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:04.904 [2024-11-20 14:28:05.785353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:04.904 [2024-11-20 14:28:05.785529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.904 "name": "raid_bdev1", 00:11:04.904 "uuid": "8cec8123-5a7e-4f52-8e72-bcf5b13d2dda", 00:11:04.904 "strip_size_kb": 0, 00:11:04.904 "state": "online", 00:11:04.904 "raid_level": "raid1", 00:11:04.904 "superblock": true, 00:11:04.904 "num_base_bdevs": 3, 00:11:04.904 "num_base_bdevs_discovered": 2, 00:11:04.904 "num_base_bdevs_operational": 2, 00:11:04.904 "base_bdevs_list": [ 00:11:04.904 { 00:11:04.904 "name": null, 00:11:04.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.904 "is_configured": false, 00:11:04.904 "data_offset": 2048, 00:11:04.904 "data_size": 63488 00:11:04.904 }, 00:11:04.904 { 00:11:04.904 "name": "pt2", 00:11:04.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.904 "is_configured": true, 00:11:04.904 "data_offset": 2048, 00:11:04.904 "data_size": 63488 00:11:04.904 }, 00:11:04.904 { 00:11:04.904 "name": "pt3", 00:11:04.904 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.904 "is_configured": true, 00:11:04.904 "data_offset": 2048, 00:11:04.904 "data_size": 63488 00:11:04.904 } 00:11:04.904 ] 00:11:04.904 }' 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.904 14:28:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.470 14:28:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.470 [2024-11-20 14:28:06.331768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8cec8123-5a7e-4f52-8e72-bcf5b13d2dda '!=' 8cec8123-5a7e-4f52-8e72-bcf5b13d2dda ']' 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68804 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68804 ']' 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68804 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.470 
14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68804 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68804' 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68804 00:11:05.470 [2024-11-20 14:28:06.406606] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.470 14:28:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68804 00:11:05.470 [2024-11-20 14:28:06.406742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.470 killing process with pid 68804 00:11:05.470 [2024-11-20 14:28:06.406826] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.470 [2024-11-20 14:28:06.406845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:05.727 [2024-11-20 14:28:06.675609] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:06.658 14:28:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:06.658 00:11:06.658 real 0m8.546s 00:11:06.658 user 0m13.877s 00:11:06.658 sys 0m1.286s 00:11:06.658 ************************************ 00:11:06.658 END TEST raid_superblock_test 00:11:06.658 ************************************ 00:11:06.658 14:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.658 14:28:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.916 14:28:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 
read 00:11:06.916 14:28:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:06.916 14:28:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.916 14:28:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:06.916 ************************************ 00:11:06.916 START TEST raid_read_error_test 00:11:06.916 ************************************ 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BFCqGxMvOX 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69254 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69254 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69254 ']' 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.916 14:28:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.916 [2024-11-20 14:28:07.890071] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:11:06.916 [2024-11-20 14:28:07.890501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69254 ] 00:11:07.174 [2024-11-20 14:28:08.083613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.431 [2024-11-20 14:28:08.242581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.689 [2024-11-20 14:28:08.507059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.689 [2024-11-20 14:28:08.507150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.948 BaseBdev1_malloc 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 true 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 [2024-11-20 14:28:08.929966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:07.948 [2024-11-20 14:28:08.930256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.948 [2024-11-20 14:28:08.930337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:07.948 [2024-11-20 14:28:08.930382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.948 [2024-11-20 14:28:08.934515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.948 [2024-11-20 14:28:08.934596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:07.948 BaseBdev1 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:07.948 14:28:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 BaseBdev2_malloc 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 true 00:11:07.948 14:28:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.948 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:07.948 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.948 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.208 [2024-11-20 14:28:09.007372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.208 [2024-11-20 14:28:09.007488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.208 [2024-11-20 14:28:09.007530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.208 [2024-11-20 14:28:09.007554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.208 [2024-11-20 14:28:09.011330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.208 [2024-11-20 14:28:09.011399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:11:08.208 BaseBdev2 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.208 BaseBdev3_malloc 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.208 true 00:11:08.208 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.209 [2024-11-20 14:28:09.088387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.209 [2024-11-20 14:28:09.088674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.209 [2024-11-20 14:28:09.088730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.209 [2024-11-20 14:28:09.088756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:11:08.209 [2024-11-20 14:28:09.092545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.209 [2024-11-20 14:28:09.092617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:08.209 BaseBdev3 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.209 [2024-11-20 14:28:09.101090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.209 [2024-11-20 14:28:09.104364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.209 [2024-11-20 14:28:09.104516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.209 [2024-11-20 14:28:09.104941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:08.209 [2024-11-20 14:28:09.104973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:08.209 [2024-11-20 14:28:09.105428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:08.209 [2024-11-20 14:28:09.105748] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:08.209 [2024-11-20 14:28:09.105790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:08.209 [2024-11-20 14:28:09.106123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.209 "name": "raid_bdev1", 00:11:08.209 "uuid": "a8b253e2-ce5b-4e21-a35d-2ef31fce0c66", 00:11:08.209 "strip_size_kb": 0, 00:11:08.209 "state": "online", 00:11:08.209 "raid_level": "raid1", 
00:11:08.209 "superblock": true, 00:11:08.209 "num_base_bdevs": 3, 00:11:08.209 "num_base_bdevs_discovered": 3, 00:11:08.209 "num_base_bdevs_operational": 3, 00:11:08.209 "base_bdevs_list": [ 00:11:08.209 { 00:11:08.209 "name": "BaseBdev1", 00:11:08.209 "uuid": "0a2c6451-2859-538c-bc2e-ebb8ed3ea23f", 00:11:08.209 "is_configured": true, 00:11:08.209 "data_offset": 2048, 00:11:08.209 "data_size": 63488 00:11:08.209 }, 00:11:08.209 { 00:11:08.209 "name": "BaseBdev2", 00:11:08.209 "uuid": "456d308d-cb88-5dce-902d-d6abe8bb65a2", 00:11:08.209 "is_configured": true, 00:11:08.209 "data_offset": 2048, 00:11:08.209 "data_size": 63488 00:11:08.209 }, 00:11:08.209 { 00:11:08.209 "name": "BaseBdev3", 00:11:08.209 "uuid": "7f6abda7-f041-5711-a400-ccbc28a41482", 00:11:08.209 "is_configured": true, 00:11:08.209 "data_offset": 2048, 00:11:08.209 "data_size": 63488 00:11:08.209 } 00:11:08.209 ] 00:11:08.209 }' 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.209 14:28:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.775 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:08.775 14:28:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:08.775 [2024-11-20 14:28:09.802825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.708 "name": "raid_bdev1", 00:11:09.708 "uuid": "a8b253e2-ce5b-4e21-a35d-2ef31fce0c66", 00:11:09.708 "strip_size_kb": 0, 00:11:09.708 "state": "online", 00:11:09.708 "raid_level": "raid1", 00:11:09.708 "superblock": true, 00:11:09.708 "num_base_bdevs": 3, 00:11:09.708 "num_base_bdevs_discovered": 3, 00:11:09.708 "num_base_bdevs_operational": 3, 00:11:09.708 "base_bdevs_list": [ 00:11:09.708 { 00:11:09.708 "name": "BaseBdev1", 00:11:09.708 "uuid": "0a2c6451-2859-538c-bc2e-ebb8ed3ea23f", 00:11:09.708 "is_configured": true, 00:11:09.708 "data_offset": 2048, 00:11:09.708 "data_size": 63488 00:11:09.708 }, 00:11:09.708 { 00:11:09.708 "name": "BaseBdev2", 00:11:09.708 "uuid": "456d308d-cb88-5dce-902d-d6abe8bb65a2", 00:11:09.708 "is_configured": true, 00:11:09.708 "data_offset": 2048, 00:11:09.708 "data_size": 63488 00:11:09.708 }, 00:11:09.708 { 00:11:09.708 "name": "BaseBdev3", 00:11:09.708 "uuid": "7f6abda7-f041-5711-a400-ccbc28a41482", 00:11:09.708 "is_configured": true, 00:11:09.708 "data_offset": 2048, 00:11:09.708 "data_size": 63488 00:11:09.708 } 00:11:09.708 ] 00:11:09.708 }' 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.708 14:28:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.274 [2024-11-20 14:28:11.216684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.274 [2024-11-20 14:28:11.216879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:11:10.274 [2024-11-20 14:28:11.220462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.274 [2024-11-20 14:28:11.220672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.274 [2024-11-20 14:28:11.220940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.274 [2024-11-20 14:28:11.221102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:10.274 { 00:11:10.274 "results": [ 00:11:10.274 { 00:11:10.274 "job": "raid_bdev1", 00:11:10.274 "core_mask": "0x1", 00:11:10.274 "workload": "randrw", 00:11:10.274 "percentage": 50, 00:11:10.274 "status": "finished", 00:11:10.274 "queue_depth": 1, 00:11:10.274 "io_size": 131072, 00:11:10.274 "runtime": 1.41142, 00:11:10.274 "iops": 8883.252327443284, 00:11:10.274 "mibps": 1110.4065409304105, 00:11:10.274 "io_failed": 0, 00:11:10.274 "io_timeout": 0, 00:11:10.274 "avg_latency_us": 108.1923995417567, 00:11:10.274 "min_latency_us": 44.916363636363634, 00:11:10.274 "max_latency_us": 1817.1345454545456 00:11:10.274 } 00:11:10.274 ], 00:11:10.274 "core_count": 1 00:11:10.274 } 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69254 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69254 ']' 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69254 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69254 
00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69254' 00:11:10.274 killing process with pid 69254 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69254 00:11:10.274 14:28:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69254 00:11:10.274 [2024-11-20 14:28:11.260361] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.532 [2024-11-20 14:28:11.472538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BFCqGxMvOX 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:11.905 00:11:11.905 real 0m4.838s 00:11:11.905 user 0m5.979s 00:11:11.905 sys 0m0.621s 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.905 14:28:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.905 ************************************ 00:11:11.905 END TEST 
raid_read_error_test 00:11:11.905 ************************************ 00:11:11.905 14:28:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:11.905 14:28:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:11.905 14:28:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.905 14:28:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.905 ************************************ 00:11:11.905 START TEST raid_write_error_test 00:11:11.905 ************************************ 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:11.905 14:28:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EIn9HRGZdF 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69405 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69405 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69405 ']' 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:11:11.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.905 14:28:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.905 [2024-11-20 14:28:12.772103] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:11:11.905 [2024-11-20 14:28:12.772274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69405 ] 00:11:12.302 [2024-11-20 14:28:12.959917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.302 [2024-11-20 14:28:13.116145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.559 [2024-11-20 14:28:13.359007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.559 [2024-11-20 14:28:13.359307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.819 14:28:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 BaseBdev1_malloc 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 true 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 [2024-11-20 14:28:13.793642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:12.819 [2024-11-20 14:28:13.793712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.819 [2024-11-20 14:28:13.793744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:12.819 [2024-11-20 14:28:13.793772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.819 [2024-11-20 14:28:13.796597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.819 [2024-11-20 14:28:13.796674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.819 BaseBdev1 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 BaseBdev2_malloc 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 true 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.819 [2024-11-20 14:28:13.849913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:12.819 [2024-11-20 14:28:13.849986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.819 [2024-11-20 14:28:13.850014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.819 [2024-11-20 14:28:13.850032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.819 [2024-11-20 14:28:13.852868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:11:12.819 [2024-11-20 14:28:13.852919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.819 BaseBdev2 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.819 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.077 BaseBdev3_malloc 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.078 true 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.078 [2024-11-20 14:28:13.917460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:13.078 [2024-11-20 14:28:13.917533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.078 [2024-11-20 14:28:13.917562] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:13.078 [2024-11-20 14:28:13.917582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.078 [2024-11-20 14:28:13.920439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.078 [2024-11-20 14:28:13.920491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:13.078 BaseBdev3 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.078 [2024-11-20 14:28:13.925563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.078 [2024-11-20 14:28:13.928051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.078 [2024-11-20 14:28:13.928157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.078 [2024-11-20 14:28:13.928447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:13.078 [2024-11-20 14:28:13.928466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:13.078 [2024-11-20 14:28:13.928814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:13.078 [2024-11-20 14:28:13.929041] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:13.078 [2024-11-20 14:28:13.929069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:11:13.078 [2024-11-20 14:28:13.929255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.078 
"name": "raid_bdev1", 00:11:13.078 "uuid": "b971144b-96d8-4f0f-819d-c2ae90942580", 00:11:13.078 "strip_size_kb": 0, 00:11:13.078 "state": "online", 00:11:13.078 "raid_level": "raid1", 00:11:13.078 "superblock": true, 00:11:13.078 "num_base_bdevs": 3, 00:11:13.078 "num_base_bdevs_discovered": 3, 00:11:13.078 "num_base_bdevs_operational": 3, 00:11:13.078 "base_bdevs_list": [ 00:11:13.078 { 00:11:13.078 "name": "BaseBdev1", 00:11:13.078 "uuid": "245dd562-96dd-5d52-b113-e86244c45e01", 00:11:13.078 "is_configured": true, 00:11:13.078 "data_offset": 2048, 00:11:13.078 "data_size": 63488 00:11:13.078 }, 00:11:13.078 { 00:11:13.078 "name": "BaseBdev2", 00:11:13.078 "uuid": "ba92bda8-577e-514e-9d78-f6f8436da3da", 00:11:13.078 "is_configured": true, 00:11:13.078 "data_offset": 2048, 00:11:13.078 "data_size": 63488 00:11:13.078 }, 00:11:13.078 { 00:11:13.078 "name": "BaseBdev3", 00:11:13.078 "uuid": "b276707e-3f97-58da-ac81-ca29d50639fd", 00:11:13.078 "is_configured": true, 00:11:13.078 "data_offset": 2048, 00:11:13.078 "data_size": 63488 00:11:13.078 } 00:11:13.078 ] 00:11:13.078 }' 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.078 14:28:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.644 14:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.644 14:28:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.644 [2024-11-20 14:28:14.579205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.578 [2024-11-20 14:28:15.447416] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:14.578 [2024-11-20 14:28:15.447489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.578 [2024-11-20 14:28:15.447750] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.578 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.579 14:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.579 14:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.579 14:28:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.579 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.579 "name": "raid_bdev1", 00:11:14.579 "uuid": "b971144b-96d8-4f0f-819d-c2ae90942580", 00:11:14.579 "strip_size_kb": 0, 00:11:14.579 "state": "online", 00:11:14.579 "raid_level": "raid1", 00:11:14.579 "superblock": true, 00:11:14.579 "num_base_bdevs": 3, 00:11:14.579 "num_base_bdevs_discovered": 2, 00:11:14.579 "num_base_bdevs_operational": 2, 00:11:14.579 "base_bdevs_list": [ 00:11:14.579 { 00:11:14.579 "name": null, 00:11:14.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.579 "is_configured": false, 00:11:14.579 "data_offset": 0, 00:11:14.579 "data_size": 63488 00:11:14.579 }, 00:11:14.579 { 00:11:14.579 "name": "BaseBdev2", 00:11:14.579 "uuid": "ba92bda8-577e-514e-9d78-f6f8436da3da", 00:11:14.579 "is_configured": true, 00:11:14.579 "data_offset": 2048, 00:11:14.579 "data_size": 63488 00:11:14.579 }, 00:11:14.579 { 00:11:14.579 "name": "BaseBdev3", 00:11:14.579 "uuid": "b276707e-3f97-58da-ac81-ca29d50639fd", 00:11:14.579 "is_configured": true, 00:11:14.579 "data_offset": 2048, 00:11:14.579 "data_size": 63488 00:11:14.579 } 00:11:14.579 ] 00:11:14.579 }' 00:11:14.579 14:28:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.579 14:28:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.145 14:28:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.145 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.145 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.145 [2024-11-20 14:28:16.041832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.145 [2024-11-20 14:28:16.042029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.145 [2024-11-20 14:28:16.045424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.145 { 00:11:15.145 "results": [ 00:11:15.145 { 00:11:15.145 "job": "raid_bdev1", 00:11:15.145 "core_mask": "0x1", 00:11:15.145 "workload": "randrw", 00:11:15.145 "percentage": 50, 00:11:15.145 "status": "finished", 00:11:15.145 "queue_depth": 1, 00:11:15.145 "io_size": 131072, 00:11:15.145 "runtime": 1.460109, 00:11:15.145 "iops": 10509.489360040929, 00:11:15.145 "mibps": 1313.686170005116, 00:11:15.145 "io_failed": 0, 00:11:15.145 "io_timeout": 0, 00:11:15.145 "avg_latency_us": 90.91794614769395, 00:11:15.145 "min_latency_us": 44.21818181818182, 00:11:15.145 "max_latency_us": 1824.581818181818 00:11:15.145 } 00:11:15.145 ], 00:11:15.145 "core_count": 1 00:11:15.145 } 00:11:15.146 [2024-11-20 14:28:16.045659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.146 [2024-11-20 14:28:16.045839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.146 [2024-11-20 14:28:16.045866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.146 14:28:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69405 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69405 ']' 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69405 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69405 00:11:15.146 killing process with pid 69405 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69405' 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69405 00:11:15.146 14:28:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69405 00:11:15.146 [2024-11-20 14:28:16.085168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.404 [2024-11-20 14:28:16.292187] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EIn9HRGZdF 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:16.414 
14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:16.414 00:11:16.414 real 0m4.772s 00:11:16.414 user 0m5.962s 00:11:16.414 sys 0m0.584s 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.414 ************************************ 00:11:16.414 END TEST raid_write_error_test 00:11:16.414 ************************************ 00:11:16.414 14:28:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.414 14:28:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:16.414 14:28:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:16.414 14:28:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:16.414 14:28:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:16.414 14:28:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.414 14:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:16.672 ************************************ 00:11:16.672 START TEST raid_state_function_test 00:11:16.672 ************************************ 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 
00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:16.672 Process raid pid: 69543 00:11:16.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69543 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69543' 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69543 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69543 ']' 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:16.672 14:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.672 [2024-11-20 14:28:17.621697] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:11:16.672 [2024-11-20 14:28:17.622237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.930 [2024-11-20 14:28:17.812221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.930 [2024-11-20 14:28:17.945901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.188 [2024-11-20 14:28:18.156459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.188 [2024-11-20 14:28:18.156521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.753 [2024-11-20 14:28:18.589474] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.753 [2024-11-20 14:28:18.589559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.753 [2024-11-20 14:28:18.589586] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.753 [2024-11-20 14:28:18.589611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.753 [2024-11-20 14:28:18.589654] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:17.753 [2024-11-20 14:28:18.589681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.753 [2024-11-20 14:28:18.589697] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:17.753 [2024-11-20 14:28:18.589718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.753 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.753 "name": "Existed_Raid", 00:11:17.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.753 "strip_size_kb": 64, 00:11:17.753 "state": "configuring", 00:11:17.753 "raid_level": "raid0", 00:11:17.753 "superblock": false, 00:11:17.753 "num_base_bdevs": 4, 00:11:17.753 "num_base_bdevs_discovered": 0, 00:11:17.753 "num_base_bdevs_operational": 4, 00:11:17.753 "base_bdevs_list": [ 00:11:17.753 { 00:11:17.753 "name": "BaseBdev1", 00:11:17.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.753 "is_configured": false, 00:11:17.753 "data_offset": 0, 00:11:17.753 "data_size": 0 00:11:17.753 }, 00:11:17.753 { 00:11:17.753 "name": "BaseBdev2", 00:11:17.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.753 "is_configured": false, 00:11:17.753 "data_offset": 0, 00:11:17.753 "data_size": 0 00:11:17.753 }, 00:11:17.753 { 00:11:17.753 "name": "BaseBdev3", 00:11:17.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.753 "is_configured": false, 00:11:17.754 "data_offset": 0, 00:11:17.754 "data_size": 0 00:11:17.754 }, 00:11:17.754 { 00:11:17.754 "name": "BaseBdev4", 00:11:17.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.754 "is_configured": false, 00:11:17.754 "data_offset": 0, 00:11:17.754 "data_size": 0 00:11:17.754 } 00:11:17.754 ] 00:11:17.754 
}' 00:11:17.754 14:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.754 14:28:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 [2024-11-20 14:28:19.105716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.318 [2024-11-20 14:28:19.105847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 [2024-11-20 14:28:19.113578] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.318 [2024-11-20 14:28:19.113678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.318 [2024-11-20 14:28:19.113706] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.318 [2024-11-20 14:28:19.113738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.318 [2024-11-20 14:28:19.113758] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.318 
[2024-11-20 14:28:19.113810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.318 [2024-11-20 14:28:19.113832] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.318 [2024-11-20 14:28:19.113861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 [2024-11-20 14:28:19.162740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.318 BaseBdev1 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 [ 00:11:18.318 { 00:11:18.318 "name": "BaseBdev1", 00:11:18.318 "aliases": [ 00:11:18.318 "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c" 00:11:18.318 ], 00:11:18.318 "product_name": "Malloc disk", 00:11:18.318 "block_size": 512, 00:11:18.318 "num_blocks": 65536, 00:11:18.318 "uuid": "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c", 00:11:18.318 "assigned_rate_limits": { 00:11:18.318 "rw_ios_per_sec": 0, 00:11:18.318 "rw_mbytes_per_sec": 0, 00:11:18.318 "r_mbytes_per_sec": 0, 00:11:18.318 "w_mbytes_per_sec": 0 00:11:18.318 }, 00:11:18.318 "claimed": true, 00:11:18.318 "claim_type": "exclusive_write", 00:11:18.318 "zoned": false, 00:11:18.318 "supported_io_types": { 00:11:18.318 "read": true, 00:11:18.318 "write": true, 00:11:18.318 "unmap": true, 00:11:18.318 "flush": true, 00:11:18.318 "reset": true, 00:11:18.318 "nvme_admin": false, 00:11:18.318 "nvme_io": false, 00:11:18.318 "nvme_io_md": false, 00:11:18.318 "write_zeroes": true, 00:11:18.318 "zcopy": true, 00:11:18.318 "get_zone_info": false, 00:11:18.318 "zone_management": false, 00:11:18.318 "zone_append": false, 00:11:18.318 "compare": false, 00:11:18.318 "compare_and_write": false, 00:11:18.318 "abort": true, 00:11:18.318 "seek_hole": false, 00:11:18.318 "seek_data": false, 00:11:18.318 "copy": true, 00:11:18.318 "nvme_iov_md": false 00:11:18.318 }, 00:11:18.318 "memory_domains": [ 00:11:18.318 { 00:11:18.318 "dma_device_id": "system", 00:11:18.318 
"dma_device_type": 1 00:11:18.318 }, 00:11:18.318 { 00:11:18.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.318 "dma_device_type": 2 00:11:18.318 } 00:11:18.318 ], 00:11:18.318 "driver_specific": {} 00:11:18.318 } 00:11:18.318 ] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.318 14:28:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.318 "name": "Existed_Raid", 00:11:18.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.318 "strip_size_kb": 64, 00:11:18.318 "state": "configuring", 00:11:18.318 "raid_level": "raid0", 00:11:18.318 "superblock": false, 00:11:18.318 "num_base_bdevs": 4, 00:11:18.318 "num_base_bdevs_discovered": 1, 00:11:18.318 "num_base_bdevs_operational": 4, 00:11:18.318 "base_bdevs_list": [ 00:11:18.318 { 00:11:18.318 "name": "BaseBdev1", 00:11:18.318 "uuid": "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c", 00:11:18.318 "is_configured": true, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 65536 00:11:18.318 }, 00:11:18.318 { 00:11:18.318 "name": "BaseBdev2", 00:11:18.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.318 "is_configured": false, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 0 00:11:18.318 }, 00:11:18.318 { 00:11:18.318 "name": "BaseBdev3", 00:11:18.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.318 "is_configured": false, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 0 00:11:18.318 }, 00:11:18.318 { 00:11:18.318 "name": "BaseBdev4", 00:11:18.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.318 "is_configured": false, 00:11:18.318 "data_offset": 0, 00:11:18.318 "data_size": 0 00:11:18.318 } 00:11:18.318 ] 00:11:18.318 }' 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.318 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.886 [2024-11-20 14:28:19.695057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.886 [2024-11-20 14:28:19.695169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.886 [2024-11-20 14:28:19.707035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.886 [2024-11-20 14:28:19.710058] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.886 [2024-11-20 14:28:19.710121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.886 [2024-11-20 14:28:19.710145] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.886 [2024-11-20 14:28:19.710174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.886 [2024-11-20 14:28:19.710189] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.886 [2024-11-20 14:28:19.710206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.886 14:28:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:18.886 "name": "Existed_Raid", 00:11:18.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.886 "strip_size_kb": 64, 00:11:18.886 "state": "configuring", 00:11:18.886 "raid_level": "raid0", 00:11:18.886 "superblock": false, 00:11:18.886 "num_base_bdevs": 4, 00:11:18.886 "num_base_bdevs_discovered": 1, 00:11:18.886 "num_base_bdevs_operational": 4, 00:11:18.886 "base_bdevs_list": [ 00:11:18.886 { 00:11:18.886 "name": "BaseBdev1", 00:11:18.886 "uuid": "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c", 00:11:18.886 "is_configured": true, 00:11:18.886 "data_offset": 0, 00:11:18.886 "data_size": 65536 00:11:18.886 }, 00:11:18.886 { 00:11:18.886 "name": "BaseBdev2", 00:11:18.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.886 "is_configured": false, 00:11:18.886 "data_offset": 0, 00:11:18.886 "data_size": 0 00:11:18.886 }, 00:11:18.886 { 00:11:18.886 "name": "BaseBdev3", 00:11:18.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.886 "is_configured": false, 00:11:18.886 "data_offset": 0, 00:11:18.886 "data_size": 0 00:11:18.886 }, 00:11:18.886 { 00:11:18.886 "name": "BaseBdev4", 00:11:18.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.886 "is_configured": false, 00:11:18.886 "data_offset": 0, 00:11:18.886 "data_size": 0 00:11:18.886 } 00:11:18.886 ] 00:11:18.886 }' 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.886 14:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.451 [2024-11-20 14:28:20.330364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:11:19.451 BaseBdev2 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.451 [ 00:11:19.451 { 00:11:19.451 "name": "BaseBdev2", 00:11:19.451 "aliases": [ 00:11:19.451 "f6915e2f-ddb5-44ff-b0f4-78d28db45c43" 00:11:19.451 ], 00:11:19.451 "product_name": "Malloc disk", 00:11:19.451 "block_size": 512, 00:11:19.451 "num_blocks": 65536, 00:11:19.451 "uuid": "f6915e2f-ddb5-44ff-b0f4-78d28db45c43", 00:11:19.451 "assigned_rate_limits": { 00:11:19.451 
"rw_ios_per_sec": 0, 00:11:19.451 "rw_mbytes_per_sec": 0, 00:11:19.451 "r_mbytes_per_sec": 0, 00:11:19.451 "w_mbytes_per_sec": 0 00:11:19.451 }, 00:11:19.451 "claimed": true, 00:11:19.451 "claim_type": "exclusive_write", 00:11:19.451 "zoned": false, 00:11:19.451 "supported_io_types": { 00:11:19.451 "read": true, 00:11:19.451 "write": true, 00:11:19.451 "unmap": true, 00:11:19.451 "flush": true, 00:11:19.451 "reset": true, 00:11:19.451 "nvme_admin": false, 00:11:19.451 "nvme_io": false, 00:11:19.451 "nvme_io_md": false, 00:11:19.451 "write_zeroes": true, 00:11:19.451 "zcopy": true, 00:11:19.451 "get_zone_info": false, 00:11:19.451 "zone_management": false, 00:11:19.451 "zone_append": false, 00:11:19.451 "compare": false, 00:11:19.451 "compare_and_write": false, 00:11:19.451 "abort": true, 00:11:19.451 "seek_hole": false, 00:11:19.451 "seek_data": false, 00:11:19.451 "copy": true, 00:11:19.451 "nvme_iov_md": false 00:11:19.451 }, 00:11:19.451 "memory_domains": [ 00:11:19.451 { 00:11:19.451 "dma_device_id": "system", 00:11:19.451 "dma_device_type": 1 00:11:19.451 }, 00:11:19.451 { 00:11:19.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.451 "dma_device_type": 2 00:11:19.451 } 00:11:19.451 ], 00:11:19.451 "driver_specific": {} 00:11:19.451 } 00:11:19.451 ] 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.451 14:28:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.451 "name": "Existed_Raid", 00:11:19.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.451 "strip_size_kb": 64, 00:11:19.451 "state": "configuring", 00:11:19.451 "raid_level": "raid0", 00:11:19.451 "superblock": false, 00:11:19.451 "num_base_bdevs": 4, 00:11:19.451 "num_base_bdevs_discovered": 2, 00:11:19.451 "num_base_bdevs_operational": 4, 00:11:19.451 "base_bdevs_list": [ 00:11:19.451 { 00:11:19.451 "name": "BaseBdev1", 
00:11:19.451 "uuid": "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c", 00:11:19.451 "is_configured": true, 00:11:19.451 "data_offset": 0, 00:11:19.451 "data_size": 65536 00:11:19.451 }, 00:11:19.451 { 00:11:19.451 "name": "BaseBdev2", 00:11:19.451 "uuid": "f6915e2f-ddb5-44ff-b0f4-78d28db45c43", 00:11:19.451 "is_configured": true, 00:11:19.451 "data_offset": 0, 00:11:19.451 "data_size": 65536 00:11:19.451 }, 00:11:19.451 { 00:11:19.451 "name": "BaseBdev3", 00:11:19.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.451 "is_configured": false, 00:11:19.451 "data_offset": 0, 00:11:19.451 "data_size": 0 00:11:19.451 }, 00:11:19.451 { 00:11:19.451 "name": "BaseBdev4", 00:11:19.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.451 "is_configured": false, 00:11:19.451 "data_offset": 0, 00:11:19.451 "data_size": 0 00:11:19.451 } 00:11:19.451 ] 00:11:19.451 }' 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.451 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.017 [2024-11-20 14:28:20.969265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.017 BaseBdev3 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 
-- # local bdev_timeout= 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.017 14:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.017 [ 00:11:20.017 { 00:11:20.017 "name": "BaseBdev3", 00:11:20.017 "aliases": [ 00:11:20.017 "f84a2713-4a37-4921-a21b-f262ced7f55b" 00:11:20.017 ], 00:11:20.017 "product_name": "Malloc disk", 00:11:20.017 "block_size": 512, 00:11:20.017 "num_blocks": 65536, 00:11:20.017 "uuid": "f84a2713-4a37-4921-a21b-f262ced7f55b", 00:11:20.017 "assigned_rate_limits": { 00:11:20.017 "rw_ios_per_sec": 0, 00:11:20.017 "rw_mbytes_per_sec": 0, 00:11:20.017 "r_mbytes_per_sec": 0, 00:11:20.017 "w_mbytes_per_sec": 0 00:11:20.017 }, 00:11:20.017 "claimed": true, 00:11:20.017 "claim_type": "exclusive_write", 00:11:20.017 "zoned": false, 00:11:20.017 "supported_io_types": { 00:11:20.017 "read": true, 00:11:20.017 "write": true, 00:11:20.017 "unmap": true, 00:11:20.017 "flush": true, 00:11:20.017 "reset": true, 00:11:20.017 "nvme_admin": false, 00:11:20.017 
"nvme_io": false, 00:11:20.017 "nvme_io_md": false, 00:11:20.017 "write_zeroes": true, 00:11:20.017 "zcopy": true, 00:11:20.017 "get_zone_info": false, 00:11:20.017 "zone_management": false, 00:11:20.017 "zone_append": false, 00:11:20.017 "compare": false, 00:11:20.017 "compare_and_write": false, 00:11:20.018 "abort": true, 00:11:20.018 "seek_hole": false, 00:11:20.018 "seek_data": false, 00:11:20.018 "copy": true, 00:11:20.018 "nvme_iov_md": false 00:11:20.018 }, 00:11:20.018 "memory_domains": [ 00:11:20.018 { 00:11:20.018 "dma_device_id": "system", 00:11:20.018 "dma_device_type": 1 00:11:20.018 }, 00:11:20.018 { 00:11:20.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.018 "dma_device_type": 2 00:11:20.018 } 00:11:20.018 ], 00:11:20.018 "driver_specific": {} 00:11:20.018 } 00:11:20.018 ] 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.018 14:28:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.018 "name": "Existed_Raid", 00:11:20.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.018 "strip_size_kb": 64, 00:11:20.018 "state": "configuring", 00:11:20.018 "raid_level": "raid0", 00:11:20.018 "superblock": false, 00:11:20.018 "num_base_bdevs": 4, 00:11:20.018 "num_base_bdevs_discovered": 3, 00:11:20.018 "num_base_bdevs_operational": 4, 00:11:20.018 "base_bdevs_list": [ 00:11:20.018 { 00:11:20.018 "name": "BaseBdev1", 00:11:20.018 "uuid": "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c", 00:11:20.018 "is_configured": true, 00:11:20.018 "data_offset": 0, 00:11:20.018 "data_size": 65536 00:11:20.018 }, 00:11:20.018 { 00:11:20.018 "name": "BaseBdev2", 00:11:20.018 "uuid": "f6915e2f-ddb5-44ff-b0f4-78d28db45c43", 00:11:20.018 "is_configured": true, 00:11:20.018 "data_offset": 0, 00:11:20.018 "data_size": 65536 00:11:20.018 }, 00:11:20.018 { 00:11:20.018 "name": "BaseBdev3", 00:11:20.018 
"uuid": "f84a2713-4a37-4921-a21b-f262ced7f55b", 00:11:20.018 "is_configured": true, 00:11:20.018 "data_offset": 0, 00:11:20.018 "data_size": 65536 00:11:20.018 }, 00:11:20.018 { 00:11:20.018 "name": "BaseBdev4", 00:11:20.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.018 "is_configured": false, 00:11:20.018 "data_offset": 0, 00:11:20.018 "data_size": 0 00:11:20.018 } 00:11:20.018 ] 00:11:20.018 }' 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.018 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.584 [2024-11-20 14:28:21.546988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.584 [2024-11-20 14:28:21.547069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:20.584 [2024-11-20 14:28:21.547089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:20.584 [2024-11-20 14:28:21.547456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:20.584 [2024-11-20 14:28:21.547708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:20.584 [2024-11-20 14:28:21.547737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:20.584 [2024-11-20 14:28:21.548085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.584 BaseBdev4 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.584 [ 00:11:20.584 { 00:11:20.584 "name": "BaseBdev4", 00:11:20.584 "aliases": [ 00:11:20.584 "f4fef69f-4d35-4a33-8e1b-a8dfbe01a82a" 00:11:20.584 ], 00:11:20.584 "product_name": "Malloc disk", 00:11:20.584 "block_size": 512, 00:11:20.584 "num_blocks": 65536, 00:11:20.584 "uuid": "f4fef69f-4d35-4a33-8e1b-a8dfbe01a82a", 00:11:20.584 "assigned_rate_limits": { 00:11:20.584 "rw_ios_per_sec": 0, 00:11:20.584 "rw_mbytes_per_sec": 0, 00:11:20.584 "r_mbytes_per_sec": 0, 00:11:20.584 "w_mbytes_per_sec": 0 00:11:20.584 }, 
00:11:20.584 "claimed": true, 00:11:20.584 "claim_type": "exclusive_write", 00:11:20.584 "zoned": false, 00:11:20.584 "supported_io_types": { 00:11:20.584 "read": true, 00:11:20.584 "write": true, 00:11:20.584 "unmap": true, 00:11:20.584 "flush": true, 00:11:20.584 "reset": true, 00:11:20.584 "nvme_admin": false, 00:11:20.584 "nvme_io": false, 00:11:20.584 "nvme_io_md": false, 00:11:20.584 "write_zeroes": true, 00:11:20.584 "zcopy": true, 00:11:20.584 "get_zone_info": false, 00:11:20.584 "zone_management": false, 00:11:20.584 "zone_append": false, 00:11:20.584 "compare": false, 00:11:20.584 "compare_and_write": false, 00:11:20.584 "abort": true, 00:11:20.584 "seek_hole": false, 00:11:20.584 "seek_data": false, 00:11:20.584 "copy": true, 00:11:20.584 "nvme_iov_md": false 00:11:20.584 }, 00:11:20.584 "memory_domains": [ 00:11:20.584 { 00:11:20.584 "dma_device_id": "system", 00:11:20.584 "dma_device_type": 1 00:11:20.584 }, 00:11:20.584 { 00:11:20.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.584 "dma_device_type": 2 00:11:20.584 } 00:11:20.584 ], 00:11:20.584 "driver_specific": {} 00:11:20.584 } 00:11:20.584 ] 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.584 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.585 "name": "Existed_Raid", 00:11:20.585 "uuid": "ba9ccb59-6635-4ca2-ba83-b8423aa8e1df", 00:11:20.585 "strip_size_kb": 64, 00:11:20.585 "state": "online", 00:11:20.585 "raid_level": "raid0", 00:11:20.585 "superblock": false, 00:11:20.585 "num_base_bdevs": 4, 00:11:20.585 "num_base_bdevs_discovered": 4, 00:11:20.585 "num_base_bdevs_operational": 4, 00:11:20.585 "base_bdevs_list": [ 00:11:20.585 { 00:11:20.585 "name": "BaseBdev1", 00:11:20.585 "uuid": "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c", 00:11:20.585 "is_configured": true, 00:11:20.585 "data_offset": 0, 00:11:20.585 "data_size": 65536 
00:11:20.585 }, 00:11:20.585 { 00:11:20.585 "name": "BaseBdev2", 00:11:20.585 "uuid": "f6915e2f-ddb5-44ff-b0f4-78d28db45c43", 00:11:20.585 "is_configured": true, 00:11:20.585 "data_offset": 0, 00:11:20.585 "data_size": 65536 00:11:20.585 }, 00:11:20.585 { 00:11:20.585 "name": "BaseBdev3", 00:11:20.585 "uuid": "f84a2713-4a37-4921-a21b-f262ced7f55b", 00:11:20.585 "is_configured": true, 00:11:20.585 "data_offset": 0, 00:11:20.585 "data_size": 65536 00:11:20.585 }, 00:11:20.585 { 00:11:20.585 "name": "BaseBdev4", 00:11:20.585 "uuid": "f4fef69f-4d35-4a33-8e1b-a8dfbe01a82a", 00:11:20.585 "is_configured": true, 00:11:20.585 "data_offset": 0, 00:11:20.585 "data_size": 65536 00:11:20.585 } 00:11:20.585 ] 00:11:20.585 }' 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.585 14:28:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:21.152 [2024-11-20 14:28:22.075714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.152 "name": "Existed_Raid", 00:11:21.152 "aliases": [ 00:11:21.152 "ba9ccb59-6635-4ca2-ba83-b8423aa8e1df" 00:11:21.152 ], 00:11:21.152 "product_name": "Raid Volume", 00:11:21.152 "block_size": 512, 00:11:21.152 "num_blocks": 262144, 00:11:21.152 "uuid": "ba9ccb59-6635-4ca2-ba83-b8423aa8e1df", 00:11:21.152 "assigned_rate_limits": { 00:11:21.152 "rw_ios_per_sec": 0, 00:11:21.152 "rw_mbytes_per_sec": 0, 00:11:21.152 "r_mbytes_per_sec": 0, 00:11:21.152 "w_mbytes_per_sec": 0 00:11:21.152 }, 00:11:21.152 "claimed": false, 00:11:21.152 "zoned": false, 00:11:21.152 "supported_io_types": { 00:11:21.152 "read": true, 00:11:21.152 "write": true, 00:11:21.152 "unmap": true, 00:11:21.152 "flush": true, 00:11:21.152 "reset": true, 00:11:21.152 "nvme_admin": false, 00:11:21.152 "nvme_io": false, 00:11:21.152 "nvme_io_md": false, 00:11:21.152 "write_zeroes": true, 00:11:21.152 "zcopy": false, 00:11:21.152 "get_zone_info": false, 00:11:21.152 "zone_management": false, 00:11:21.152 "zone_append": false, 00:11:21.152 "compare": false, 00:11:21.152 "compare_and_write": false, 00:11:21.152 "abort": false, 00:11:21.152 "seek_hole": false, 00:11:21.152 "seek_data": false, 00:11:21.152 "copy": false, 00:11:21.152 "nvme_iov_md": false 00:11:21.152 }, 00:11:21.152 "memory_domains": [ 00:11:21.152 { 00:11:21.152 "dma_device_id": "system", 00:11:21.152 "dma_device_type": 1 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.152 "dma_device_type": 2 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "dma_device_id": "system", 00:11:21.152 "dma_device_type": 1 00:11:21.152 }, 
00:11:21.152 { 00:11:21.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.152 "dma_device_type": 2 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "dma_device_id": "system", 00:11:21.152 "dma_device_type": 1 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.152 "dma_device_type": 2 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "dma_device_id": "system", 00:11:21.152 "dma_device_type": 1 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.152 "dma_device_type": 2 00:11:21.152 } 00:11:21.152 ], 00:11:21.152 "driver_specific": { 00:11:21.152 "raid": { 00:11:21.152 "uuid": "ba9ccb59-6635-4ca2-ba83-b8423aa8e1df", 00:11:21.152 "strip_size_kb": 64, 00:11:21.152 "state": "online", 00:11:21.152 "raid_level": "raid0", 00:11:21.152 "superblock": false, 00:11:21.152 "num_base_bdevs": 4, 00:11:21.152 "num_base_bdevs_discovered": 4, 00:11:21.152 "num_base_bdevs_operational": 4, 00:11:21.152 "base_bdevs_list": [ 00:11:21.152 { 00:11:21.152 "name": "BaseBdev1", 00:11:21.152 "uuid": "7811dc03-c6e5-4314-bfa1-d3a6d3d0177c", 00:11:21.152 "is_configured": true, 00:11:21.152 "data_offset": 0, 00:11:21.152 "data_size": 65536 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "name": "BaseBdev2", 00:11:21.152 "uuid": "f6915e2f-ddb5-44ff-b0f4-78d28db45c43", 00:11:21.152 "is_configured": true, 00:11:21.152 "data_offset": 0, 00:11:21.152 "data_size": 65536 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "name": "BaseBdev3", 00:11:21.152 "uuid": "f84a2713-4a37-4921-a21b-f262ced7f55b", 00:11:21.152 "is_configured": true, 00:11:21.152 "data_offset": 0, 00:11:21.152 "data_size": 65536 00:11:21.152 }, 00:11:21.152 { 00:11:21.152 "name": "BaseBdev4", 00:11:21.152 "uuid": "f4fef69f-4d35-4a33-8e1b-a8dfbe01a82a", 00:11:21.152 "is_configured": true, 00:11:21.152 "data_offset": 0, 00:11:21.152 "data_size": 65536 00:11:21.152 } 00:11:21.152 ] 00:11:21.152 } 00:11:21.152 } 00:11:21.152 }' 00:11:21.152 14:28:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:21.152 BaseBdev2 00:11:21.152 BaseBdev3 00:11:21.152 BaseBdev4' 00:11:21.152 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.411 14:28:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.411 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.411 [2024-11-20 14:28:22.423467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.412 [2024-11-20 14:28:22.423519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.412 [2024-11-20 14:28:22.423610] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.670 
14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.670 "name": "Existed_Raid", 00:11:21.670 "uuid": "ba9ccb59-6635-4ca2-ba83-b8423aa8e1df", 00:11:21.670 "strip_size_kb": 64, 00:11:21.670 "state": "offline", 00:11:21.670 "raid_level": "raid0", 00:11:21.670 "superblock": false, 00:11:21.670 "num_base_bdevs": 4, 00:11:21.670 "num_base_bdevs_discovered": 3, 00:11:21.670 "num_base_bdevs_operational": 3, 00:11:21.670 "base_bdevs_list": [ 00:11:21.670 { 00:11:21.670 "name": null, 00:11:21.670 
"uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.670 "is_configured": false, 00:11:21.670 "data_offset": 0, 00:11:21.670 "data_size": 65536 00:11:21.670 }, 00:11:21.670 { 00:11:21.670 "name": "BaseBdev2", 00:11:21.670 "uuid": "f6915e2f-ddb5-44ff-b0f4-78d28db45c43", 00:11:21.670 "is_configured": true, 00:11:21.670 "data_offset": 0, 00:11:21.670 "data_size": 65536 00:11:21.670 }, 00:11:21.670 { 00:11:21.670 "name": "BaseBdev3", 00:11:21.670 "uuid": "f84a2713-4a37-4921-a21b-f262ced7f55b", 00:11:21.670 "is_configured": true, 00:11:21.670 "data_offset": 0, 00:11:21.670 "data_size": 65536 00:11:21.670 }, 00:11:21.670 { 00:11:21.670 "name": "BaseBdev4", 00:11:21.670 "uuid": "f4fef69f-4d35-4a33-8e1b-a8dfbe01a82a", 00:11:21.670 "is_configured": true, 00:11:21.670 "data_offset": 0, 00:11:21.670 "data_size": 65536 00:11:21.670 } 00:11:21.670 ] 00:11:21.670 }' 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.670 14:28:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.236 [2024-11-20 14:28:23.059144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.236 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.236 [2024-11-20 14:28:23.211568] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.495 [2024-11-20 14:28:23.360166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:22.495 [2024-11-20 14:28:23.360360] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.495 14:28:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.495 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 BaseBdev2 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.755 
14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 [ 00:11:22.755 { 00:11:22.755 "name": "BaseBdev2", 00:11:22.755 "aliases": [ 00:11:22.755 "c1f97cc3-9f45-4f98-ac37-cc893737aedf" 00:11:22.755 ], 00:11:22.755 "product_name": "Malloc disk", 00:11:22.755 "block_size": 512, 00:11:22.755 "num_blocks": 65536, 00:11:22.755 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:22.755 "assigned_rate_limits": { 00:11:22.755 "rw_ios_per_sec": 0, 00:11:22.755 "rw_mbytes_per_sec": 0, 00:11:22.755 "r_mbytes_per_sec": 0, 00:11:22.755 "w_mbytes_per_sec": 0 00:11:22.755 }, 00:11:22.755 "claimed": false, 00:11:22.755 "zoned": false, 00:11:22.755 "supported_io_types": { 00:11:22.755 "read": true, 00:11:22.755 "write": true, 00:11:22.755 "unmap": true, 00:11:22.755 "flush": true, 00:11:22.755 "reset": true, 00:11:22.755 "nvme_admin": false, 00:11:22.755 "nvme_io": false, 00:11:22.755 "nvme_io_md": false, 00:11:22.755 "write_zeroes": true, 
00:11:22.755 "zcopy": true, 00:11:22.755 "get_zone_info": false, 00:11:22.755 "zone_management": false, 00:11:22.755 "zone_append": false, 00:11:22.755 "compare": false, 00:11:22.755 "compare_and_write": false, 00:11:22.755 "abort": true, 00:11:22.755 "seek_hole": false, 00:11:22.755 "seek_data": false, 00:11:22.755 "copy": true, 00:11:22.755 "nvme_iov_md": false 00:11:22.755 }, 00:11:22.755 "memory_domains": [ 00:11:22.755 { 00:11:22.755 "dma_device_id": "system", 00:11:22.755 "dma_device_type": 1 00:11:22.755 }, 00:11:22.755 { 00:11:22.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.755 "dma_device_type": 2 00:11:22.755 } 00:11:22.755 ], 00:11:22.755 "driver_specific": {} 00:11:22.755 } 00:11:22.755 ] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 BaseBdev3 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.755 14:28:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 [ 00:11:22.755 { 00:11:22.755 "name": "BaseBdev3", 00:11:22.755 "aliases": [ 00:11:22.755 "f77cfddf-d284-449a-8ccb-db8f07631dd8" 00:11:22.755 ], 00:11:22.755 "product_name": "Malloc disk", 00:11:22.755 "block_size": 512, 00:11:22.755 "num_blocks": 65536, 00:11:22.755 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:22.755 "assigned_rate_limits": { 00:11:22.755 "rw_ios_per_sec": 0, 00:11:22.755 "rw_mbytes_per_sec": 0, 00:11:22.755 "r_mbytes_per_sec": 0, 00:11:22.755 "w_mbytes_per_sec": 0 00:11:22.755 }, 00:11:22.755 "claimed": false, 00:11:22.755 "zoned": false, 00:11:22.755 "supported_io_types": { 00:11:22.755 "read": true, 00:11:22.755 "write": true, 00:11:22.755 "unmap": true, 00:11:22.755 "flush": true, 00:11:22.755 "reset": true, 00:11:22.755 "nvme_admin": false, 00:11:22.755 "nvme_io": false, 00:11:22.755 "nvme_io_md": false, 00:11:22.755 "write_zeroes": true, 
00:11:22.755 "zcopy": true, 00:11:22.755 "get_zone_info": false, 00:11:22.755 "zone_management": false, 00:11:22.755 "zone_append": false, 00:11:22.755 "compare": false, 00:11:22.755 "compare_and_write": false, 00:11:22.755 "abort": true, 00:11:22.755 "seek_hole": false, 00:11:22.755 "seek_data": false, 00:11:22.755 "copy": true, 00:11:22.755 "nvme_iov_md": false 00:11:22.755 }, 00:11:22.755 "memory_domains": [ 00:11:22.755 { 00:11:22.755 "dma_device_id": "system", 00:11:22.755 "dma_device_type": 1 00:11:22.755 }, 00:11:22.755 { 00:11:22.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.755 "dma_device_type": 2 00:11:22.755 } 00:11:22.755 ], 00:11:22.755 "driver_specific": {} 00:11:22.755 } 00:11:22.755 ] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 BaseBdev4 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.755 14:28:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.755 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.756 [ 00:11:22.756 { 00:11:22.756 "name": "BaseBdev4", 00:11:22.756 "aliases": [ 00:11:22.756 "92b6cfb2-5def-423c-af04-50e576265562" 00:11:22.756 ], 00:11:22.756 "product_name": "Malloc disk", 00:11:22.756 "block_size": 512, 00:11:22.756 "num_blocks": 65536, 00:11:22.756 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:22.756 "assigned_rate_limits": { 00:11:22.756 "rw_ios_per_sec": 0, 00:11:22.756 "rw_mbytes_per_sec": 0, 00:11:22.756 "r_mbytes_per_sec": 0, 00:11:22.756 "w_mbytes_per_sec": 0 00:11:22.756 }, 00:11:22.756 "claimed": false, 00:11:22.756 "zoned": false, 00:11:22.756 "supported_io_types": { 00:11:22.756 "read": true, 00:11:22.756 "write": true, 00:11:22.756 "unmap": true, 00:11:22.756 "flush": true, 00:11:22.756 "reset": true, 00:11:22.756 "nvme_admin": false, 00:11:22.756 "nvme_io": false, 00:11:22.756 "nvme_io_md": false, 00:11:22.756 "write_zeroes": true, 
00:11:22.756 "zcopy": true, 00:11:22.756 "get_zone_info": false, 00:11:22.756 "zone_management": false, 00:11:22.756 "zone_append": false, 00:11:22.756 "compare": false, 00:11:22.756 "compare_and_write": false, 00:11:22.756 "abort": true, 00:11:22.756 "seek_hole": false, 00:11:22.756 "seek_data": false, 00:11:22.756 "copy": true, 00:11:22.756 "nvme_iov_md": false 00:11:22.756 }, 00:11:22.756 "memory_domains": [ 00:11:22.756 { 00:11:22.756 "dma_device_id": "system", 00:11:22.756 "dma_device_type": 1 00:11:22.756 }, 00:11:22.756 { 00:11:22.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.756 "dma_device_type": 2 00:11:22.756 } 00:11:22.756 ], 00:11:22.756 "driver_specific": {} 00:11:22.756 } 00:11:22.756 ] 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.756 [2024-11-20 14:28:23.746274] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.756 [2024-11-20 14:28:23.746456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.756 [2024-11-20 14:28:23.746506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.756 [2024-11-20 14:28:23.748985] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.756 [2024-11-20 14:28:23.749059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.756 14:28:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.015 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.015 "name": "Existed_Raid", 00:11:23.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.015 "strip_size_kb": 64, 00:11:23.015 "state": "configuring", 00:11:23.015 "raid_level": "raid0", 00:11:23.015 "superblock": false, 00:11:23.015 "num_base_bdevs": 4, 00:11:23.015 "num_base_bdevs_discovered": 3, 00:11:23.015 "num_base_bdevs_operational": 4, 00:11:23.015 "base_bdevs_list": [ 00:11:23.015 { 00:11:23.015 "name": "BaseBdev1", 00:11:23.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.015 "is_configured": false, 00:11:23.015 "data_offset": 0, 00:11:23.015 "data_size": 0 00:11:23.015 }, 00:11:23.015 { 00:11:23.015 "name": "BaseBdev2", 00:11:23.015 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:23.015 "is_configured": true, 00:11:23.015 "data_offset": 0, 00:11:23.015 "data_size": 65536 00:11:23.015 }, 00:11:23.015 { 00:11:23.015 "name": "BaseBdev3", 00:11:23.015 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:23.015 "is_configured": true, 00:11:23.015 "data_offset": 0, 00:11:23.015 "data_size": 65536 00:11:23.015 }, 00:11:23.015 { 00:11:23.015 "name": "BaseBdev4", 00:11:23.015 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:23.015 "is_configured": true, 00:11:23.015 "data_offset": 0, 00:11:23.015 "data_size": 65536 00:11:23.015 } 00:11:23.015 ] 00:11:23.015 }' 00:11:23.015 14:28:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.015 14:28:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.274 [2024-11-20 14:28:24.238444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.274 
14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.274 "name": "Existed_Raid", 00:11:23.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.274 "strip_size_kb": 64, 00:11:23.274 "state": "configuring", 00:11:23.274 "raid_level": "raid0", 00:11:23.274 "superblock": false, 00:11:23.274 "num_base_bdevs": 4, 00:11:23.274 "num_base_bdevs_discovered": 2, 00:11:23.274 "num_base_bdevs_operational": 4, 00:11:23.274 "base_bdevs_list": [ 00:11:23.274 { 00:11:23.274 "name": "BaseBdev1", 00:11:23.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.274 "is_configured": false, 00:11:23.274 "data_offset": 0, 00:11:23.274 "data_size": 0 00:11:23.274 }, 00:11:23.274 { 00:11:23.274 "name": null, 00:11:23.274 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:23.274 "is_configured": false, 00:11:23.274 "data_offset": 0, 00:11:23.274 "data_size": 65536 00:11:23.274 }, 00:11:23.274 { 00:11:23.274 "name": "BaseBdev3", 00:11:23.274 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:23.274 "is_configured": true, 00:11:23.274 "data_offset": 0, 00:11:23.274 "data_size": 65536 00:11:23.274 }, 00:11:23.274 { 00:11:23.274 "name": "BaseBdev4", 00:11:23.274 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:23.274 "is_configured": true, 00:11:23.274 "data_offset": 0, 00:11:23.274 "data_size": 65536 00:11:23.274 } 00:11:23.274 ] 00:11:23.274 }' 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.274 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.840 14:28:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.840 [2024-11-20 14:28:24.848586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.840 BaseBdev1 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.840 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.841 [ 00:11:23.841 { 00:11:23.841 "name": "BaseBdev1", 00:11:23.841 "aliases": [ 00:11:23.841 "2503dc51-39ec-405d-bac8-7d325983a32e" 00:11:23.841 ], 00:11:23.841 "product_name": "Malloc disk", 00:11:23.841 "block_size": 512, 00:11:23.841 "num_blocks": 65536, 00:11:23.841 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:23.841 "assigned_rate_limits": { 00:11:23.841 "rw_ios_per_sec": 0, 00:11:23.841 "rw_mbytes_per_sec": 0, 00:11:23.841 "r_mbytes_per_sec": 0, 00:11:23.841 "w_mbytes_per_sec": 0 00:11:23.841 }, 00:11:23.841 "claimed": true, 00:11:23.841 "claim_type": "exclusive_write", 00:11:23.841 "zoned": false, 00:11:23.841 "supported_io_types": { 00:11:23.841 "read": true, 00:11:23.841 "write": true, 00:11:23.841 "unmap": true, 00:11:23.841 "flush": true, 00:11:23.841 "reset": true, 00:11:23.841 "nvme_admin": false, 00:11:23.841 "nvme_io": false, 00:11:23.841 "nvme_io_md": false, 00:11:23.841 "write_zeroes": true, 00:11:23.841 "zcopy": true, 00:11:23.841 "get_zone_info": false, 00:11:23.841 "zone_management": false, 00:11:23.841 "zone_append": false, 00:11:23.841 "compare": false, 00:11:23.841 "compare_and_write": false, 00:11:23.841 "abort": true, 00:11:23.841 "seek_hole": false, 00:11:23.841 "seek_data": false, 00:11:23.841 "copy": true, 00:11:23.841 "nvme_iov_md": false 00:11:23.841 }, 00:11:23.841 "memory_domains": [ 00:11:23.841 { 00:11:23.841 "dma_device_id": "system", 00:11:23.841 "dma_device_type": 1 00:11:23.841 }, 00:11:23.841 { 00:11:23.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.841 "dma_device_type": 2 00:11:23.841 } 00:11:23.841 ], 00:11:23.841 "driver_specific": {} 
00:11:23.841 } 00:11:23.841 ] 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.841 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.099 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.099 14:28:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.099 "name": "Existed_Raid", 00:11:24.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.099 "strip_size_kb": 64, 00:11:24.099 "state": "configuring", 00:11:24.099 "raid_level": "raid0", 00:11:24.099 "superblock": false, 00:11:24.099 "num_base_bdevs": 4, 00:11:24.099 "num_base_bdevs_discovered": 3, 00:11:24.099 "num_base_bdevs_operational": 4, 00:11:24.099 "base_bdevs_list": [ 00:11:24.099 { 00:11:24.099 "name": "BaseBdev1", 00:11:24.099 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:24.099 "is_configured": true, 00:11:24.099 "data_offset": 0, 00:11:24.099 "data_size": 65536 00:11:24.099 }, 00:11:24.099 { 00:11:24.099 "name": null, 00:11:24.099 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:24.099 "is_configured": false, 00:11:24.099 "data_offset": 0, 00:11:24.099 "data_size": 65536 00:11:24.099 }, 00:11:24.099 { 00:11:24.099 "name": "BaseBdev3", 00:11:24.099 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:24.099 "is_configured": true, 00:11:24.099 "data_offset": 0, 00:11:24.099 "data_size": 65536 00:11:24.099 }, 00:11:24.099 { 00:11:24.099 "name": "BaseBdev4", 00:11:24.099 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:24.099 "is_configured": true, 00:11:24.099 "data_offset": 0, 00:11:24.099 "data_size": 65536 00:11:24.099 } 00:11:24.099 ] 00:11:24.099 }' 00:11:24.099 14:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.099 14:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.359 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.359 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.359 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.359 14:28:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.359 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.618 [2024-11-20 14:28:25.444953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.618 "name": "Existed_Raid", 00:11:24.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.618 "strip_size_kb": 64, 00:11:24.618 "state": "configuring", 00:11:24.618 "raid_level": "raid0", 00:11:24.618 "superblock": false, 00:11:24.618 "num_base_bdevs": 4, 00:11:24.618 "num_base_bdevs_discovered": 2, 00:11:24.618 "num_base_bdevs_operational": 4, 00:11:24.618 "base_bdevs_list": [ 00:11:24.618 { 00:11:24.618 "name": "BaseBdev1", 00:11:24.618 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:24.618 "is_configured": true, 00:11:24.618 "data_offset": 0, 00:11:24.618 "data_size": 65536 00:11:24.618 }, 00:11:24.618 { 00:11:24.618 "name": null, 00:11:24.618 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:24.618 "is_configured": false, 00:11:24.618 "data_offset": 0, 00:11:24.618 "data_size": 65536 00:11:24.618 }, 00:11:24.618 { 00:11:24.618 "name": null, 00:11:24.618 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:24.618 "is_configured": false, 00:11:24.618 "data_offset": 0, 00:11:24.618 "data_size": 65536 00:11:24.618 }, 00:11:24.618 { 00:11:24.618 "name": "BaseBdev4", 00:11:24.618 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:24.618 "is_configured": true, 00:11:24.618 "data_offset": 0, 00:11:24.618 "data_size": 65536 00:11:24.618 } 00:11:24.618 ] 00:11:24.618 }' 
00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.618 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.184 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.185 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.185 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.185 14:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:25.185 14:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.185 [2024-11-20 14:28:26.049043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.185 14:28:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.185 "name": "Existed_Raid", 00:11:25.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.185 "strip_size_kb": 64, 00:11:25.185 "state": "configuring", 00:11:25.185 "raid_level": "raid0", 00:11:25.185 "superblock": false, 00:11:25.185 "num_base_bdevs": 4, 00:11:25.185 "num_base_bdevs_discovered": 3, 00:11:25.185 "num_base_bdevs_operational": 4, 00:11:25.185 "base_bdevs_list": [ 00:11:25.185 { 00:11:25.185 "name": "BaseBdev1", 00:11:25.185 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:25.185 "is_configured": true, 00:11:25.185 "data_offset": 0, 00:11:25.185 "data_size": 65536 00:11:25.185 }, 00:11:25.185 { 00:11:25.185 "name": null, 00:11:25.185 
"uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:25.185 "is_configured": false, 00:11:25.185 "data_offset": 0, 00:11:25.185 "data_size": 65536 00:11:25.185 }, 00:11:25.185 { 00:11:25.185 "name": "BaseBdev3", 00:11:25.185 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:25.185 "is_configured": true, 00:11:25.185 "data_offset": 0, 00:11:25.185 "data_size": 65536 00:11:25.185 }, 00:11:25.185 { 00:11:25.185 "name": "BaseBdev4", 00:11:25.185 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:25.185 "is_configured": true, 00:11:25.185 "data_offset": 0, 00:11:25.185 "data_size": 65536 00:11:25.185 } 00:11:25.185 ] 00:11:25.185 }' 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.185 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.751 [2024-11-20 14:28:26.613258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.751 14:28:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.751 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.751 "name": "Existed_Raid", 00:11:25.751 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:25.751 "strip_size_kb": 64, 00:11:25.751 "state": "configuring", 00:11:25.751 "raid_level": "raid0", 00:11:25.751 "superblock": false, 00:11:25.751 "num_base_bdevs": 4, 00:11:25.751 "num_base_bdevs_discovered": 2, 00:11:25.751 "num_base_bdevs_operational": 4, 00:11:25.751 "base_bdevs_list": [ 00:11:25.751 { 00:11:25.751 "name": null, 00:11:25.751 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:25.751 "is_configured": false, 00:11:25.751 "data_offset": 0, 00:11:25.751 "data_size": 65536 00:11:25.751 }, 00:11:25.751 { 00:11:25.752 "name": null, 00:11:25.752 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:25.752 "is_configured": false, 00:11:25.752 "data_offset": 0, 00:11:25.752 "data_size": 65536 00:11:25.752 }, 00:11:25.752 { 00:11:25.752 "name": "BaseBdev3", 00:11:25.752 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:25.752 "is_configured": true, 00:11:25.752 "data_offset": 0, 00:11:25.752 "data_size": 65536 00:11:25.752 }, 00:11:25.752 { 00:11:25.752 "name": "BaseBdev4", 00:11:25.752 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:25.752 "is_configured": true, 00:11:25.752 "data_offset": 0, 00:11:25.752 "data_size": 65536 00:11:25.752 } 00:11:25.752 ] 00:11:25.752 }' 00:11:25.752 14:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.752 14:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.317 [2024-11-20 14:28:27.231780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.317 "name": "Existed_Raid", 00:11:26.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.317 "strip_size_kb": 64, 00:11:26.317 "state": "configuring", 00:11:26.317 "raid_level": "raid0", 00:11:26.317 "superblock": false, 00:11:26.317 "num_base_bdevs": 4, 00:11:26.317 "num_base_bdevs_discovered": 3, 00:11:26.317 "num_base_bdevs_operational": 4, 00:11:26.317 "base_bdevs_list": [ 00:11:26.317 { 00:11:26.317 "name": null, 00:11:26.317 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:26.317 "is_configured": false, 00:11:26.317 "data_offset": 0, 00:11:26.317 "data_size": 65536 00:11:26.317 }, 00:11:26.317 { 00:11:26.317 "name": "BaseBdev2", 00:11:26.317 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:26.317 "is_configured": true, 00:11:26.317 "data_offset": 0, 00:11:26.317 "data_size": 65536 00:11:26.317 }, 00:11:26.317 { 00:11:26.317 "name": "BaseBdev3", 00:11:26.317 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:26.317 "is_configured": true, 00:11:26.317 "data_offset": 0, 00:11:26.317 "data_size": 65536 00:11:26.317 }, 00:11:26.317 { 00:11:26.317 "name": "BaseBdev4", 00:11:26.317 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:26.317 "is_configured": true, 00:11:26.317 "data_offset": 0, 00:11:26.317 "data_size": 65536 00:11:26.317 } 00:11:26.317 ] 00:11:26.317 }' 00:11:26.317 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:11:26.318 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2503dc51-39ec-405d-bac8-7d325983a32e 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.885 [2024-11-20 14:28:27.846916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:26.885 [2024-11-20 14:28:27.847006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:26.885 [2024-11-20 14:28:27.847021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:26.885 [2024-11-20 14:28:27.847370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:26.885 [2024-11-20 14:28:27.847559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:26.885 [2024-11-20 14:28:27.847580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:26.885 [2024-11-20 14:28:27.847943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.885 NewBaseBdev 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # 
rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.885 [ 00:11:26.885 { 00:11:26.885 "name": "NewBaseBdev", 00:11:26.885 "aliases": [ 00:11:26.885 "2503dc51-39ec-405d-bac8-7d325983a32e" 00:11:26.885 ], 00:11:26.885 "product_name": "Malloc disk", 00:11:26.885 "block_size": 512, 00:11:26.885 "num_blocks": 65536, 00:11:26.885 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:26.885 "assigned_rate_limits": { 00:11:26.885 "rw_ios_per_sec": 0, 00:11:26.885 "rw_mbytes_per_sec": 0, 00:11:26.885 "r_mbytes_per_sec": 0, 00:11:26.885 "w_mbytes_per_sec": 0 00:11:26.885 }, 00:11:26.885 "claimed": true, 00:11:26.885 "claim_type": "exclusive_write", 00:11:26.885 "zoned": false, 00:11:26.885 "supported_io_types": { 00:11:26.885 "read": true, 00:11:26.885 "write": true, 00:11:26.885 "unmap": true, 00:11:26.885 "flush": true, 00:11:26.885 "reset": true, 00:11:26.885 "nvme_admin": false, 00:11:26.885 "nvme_io": false, 00:11:26.885 "nvme_io_md": false, 00:11:26.885 "write_zeroes": true, 00:11:26.885 "zcopy": true, 00:11:26.885 "get_zone_info": false, 00:11:26.885 "zone_management": false, 00:11:26.885 "zone_append": false, 00:11:26.885 "compare": false, 00:11:26.885 "compare_and_write": false, 00:11:26.885 "abort": true, 00:11:26.885 "seek_hole": false, 00:11:26.885 "seek_data": false, 00:11:26.885 "copy": true, 00:11:26.885 "nvme_iov_md": false 00:11:26.885 }, 00:11:26.885 "memory_domains": [ 00:11:26.885 { 00:11:26.885 "dma_device_id": "system", 00:11:26.885 "dma_device_type": 1 00:11:26.885 }, 00:11:26.885 { 00:11:26.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.885 "dma_device_type": 2 00:11:26.885 } 00:11:26.885 ], 00:11:26.885 "driver_specific": {} 00:11:26.885 } 00:11:26.885 ] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.885 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.886 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.886 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.886 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.886 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.886 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.886 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.886 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.144 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.144 "name": 
"Existed_Raid", 00:11:27.144 "uuid": "4cedc055-e57a-4322-99df-401d20951ae3", 00:11:27.144 "strip_size_kb": 64, 00:11:27.144 "state": "online", 00:11:27.144 "raid_level": "raid0", 00:11:27.144 "superblock": false, 00:11:27.144 "num_base_bdevs": 4, 00:11:27.144 "num_base_bdevs_discovered": 4, 00:11:27.144 "num_base_bdevs_operational": 4, 00:11:27.144 "base_bdevs_list": [ 00:11:27.144 { 00:11:27.144 "name": "NewBaseBdev", 00:11:27.144 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:27.144 "is_configured": true, 00:11:27.144 "data_offset": 0, 00:11:27.144 "data_size": 65536 00:11:27.144 }, 00:11:27.144 { 00:11:27.144 "name": "BaseBdev2", 00:11:27.144 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:27.144 "is_configured": true, 00:11:27.144 "data_offset": 0, 00:11:27.144 "data_size": 65536 00:11:27.144 }, 00:11:27.144 { 00:11:27.144 "name": "BaseBdev3", 00:11:27.144 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:27.144 "is_configured": true, 00:11:27.144 "data_offset": 0, 00:11:27.144 "data_size": 65536 00:11:27.144 }, 00:11:27.144 { 00:11:27.144 "name": "BaseBdev4", 00:11:27.144 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:27.144 "is_configured": true, 00:11:27.144 "data_offset": 0, 00:11:27.144 "data_size": 65536 00:11:27.144 } 00:11:27.144 ] 00:11:27.144 }' 00:11:27.144 14:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.144 14:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.402 14:28:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.402 [2024-11-20 14:28:28.423645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.402 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.676 "name": "Existed_Raid", 00:11:27.676 "aliases": [ 00:11:27.676 "4cedc055-e57a-4322-99df-401d20951ae3" 00:11:27.676 ], 00:11:27.676 "product_name": "Raid Volume", 00:11:27.676 "block_size": 512, 00:11:27.676 "num_blocks": 262144, 00:11:27.676 "uuid": "4cedc055-e57a-4322-99df-401d20951ae3", 00:11:27.676 "assigned_rate_limits": { 00:11:27.676 "rw_ios_per_sec": 0, 00:11:27.676 "rw_mbytes_per_sec": 0, 00:11:27.676 "r_mbytes_per_sec": 0, 00:11:27.676 "w_mbytes_per_sec": 0 00:11:27.676 }, 00:11:27.676 "claimed": false, 00:11:27.676 "zoned": false, 00:11:27.676 "supported_io_types": { 00:11:27.676 "read": true, 00:11:27.676 "write": true, 00:11:27.676 "unmap": true, 00:11:27.676 "flush": true, 00:11:27.676 "reset": true, 00:11:27.676 "nvme_admin": false, 00:11:27.676 "nvme_io": false, 00:11:27.676 "nvme_io_md": false, 00:11:27.676 "write_zeroes": true, 00:11:27.676 "zcopy": false, 00:11:27.676 "get_zone_info": false, 00:11:27.676 "zone_management": false, 00:11:27.676 "zone_append": false, 00:11:27.676 "compare": 
false, 00:11:27.676 "compare_and_write": false, 00:11:27.676 "abort": false, 00:11:27.676 "seek_hole": false, 00:11:27.676 "seek_data": false, 00:11:27.676 "copy": false, 00:11:27.676 "nvme_iov_md": false 00:11:27.676 }, 00:11:27.676 "memory_domains": [ 00:11:27.676 { 00:11:27.676 "dma_device_id": "system", 00:11:27.676 "dma_device_type": 1 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.676 "dma_device_type": 2 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "dma_device_id": "system", 00:11:27.676 "dma_device_type": 1 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.676 "dma_device_type": 2 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "dma_device_id": "system", 00:11:27.676 "dma_device_type": 1 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.676 "dma_device_type": 2 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "dma_device_id": "system", 00:11:27.676 "dma_device_type": 1 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.676 "dma_device_type": 2 00:11:27.676 } 00:11:27.676 ], 00:11:27.676 "driver_specific": { 00:11:27.676 "raid": { 00:11:27.676 "uuid": "4cedc055-e57a-4322-99df-401d20951ae3", 00:11:27.676 "strip_size_kb": 64, 00:11:27.676 "state": "online", 00:11:27.676 "raid_level": "raid0", 00:11:27.676 "superblock": false, 00:11:27.676 "num_base_bdevs": 4, 00:11:27.676 "num_base_bdevs_discovered": 4, 00:11:27.676 "num_base_bdevs_operational": 4, 00:11:27.676 "base_bdevs_list": [ 00:11:27.676 { 00:11:27.676 "name": "NewBaseBdev", 00:11:27.676 "uuid": "2503dc51-39ec-405d-bac8-7d325983a32e", 00:11:27.676 "is_configured": true, 00:11:27.676 "data_offset": 0, 00:11:27.676 "data_size": 65536 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "name": "BaseBdev2", 00:11:27.676 "uuid": "c1f97cc3-9f45-4f98-ac37-cc893737aedf", 00:11:27.676 "is_configured": true, 00:11:27.676 "data_offset": 0, 00:11:27.676 
"data_size": 65536 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "name": "BaseBdev3", 00:11:27.676 "uuid": "f77cfddf-d284-449a-8ccb-db8f07631dd8", 00:11:27.676 "is_configured": true, 00:11:27.676 "data_offset": 0, 00:11:27.676 "data_size": 65536 00:11:27.676 }, 00:11:27.676 { 00:11:27.676 "name": "BaseBdev4", 00:11:27.676 "uuid": "92b6cfb2-5def-423c-af04-50e576265562", 00:11:27.676 "is_configured": true, 00:11:27.676 "data_offset": 0, 00:11:27.676 "data_size": 65536 00:11:27.676 } 00:11:27.676 ] 00:11:27.676 } 00:11:27.676 } 00:11:27.676 }' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:27.676 BaseBdev2 00:11:27.676 BaseBdev3 00:11:27.676 BaseBdev4' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 
' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.676 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.949 [2024-11-20 14:28:28.819317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.949 [2024-11-20 14:28:28.819368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.949 [2024-11-20 14:28:28.819518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.949 [2024-11-20 14:28:28.819678] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.949 [2024-11-20 14:28:28.819708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:27.949 14:28:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69543 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69543 ']' 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69543 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69543 00:11:27.949 killing process with pid 69543 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69543' 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69543 00:11:27.949 [2024-11-20 14:28:28.859968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.949 14:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69543 00:11:28.516 [2024-11-20 14:28:29.282136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.450 ************************************ 00:11:29.450 END TEST raid_state_function_test 00:11:29.450 ************************************ 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:29.450 00:11:29.450 real 0m12.891s 00:11:29.450 user 0m21.171s 00:11:29.450 sys 0m1.775s 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.450 14:28:30 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:11:29.450 14:28:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:29.450 14:28:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.450 14:28:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.450 ************************************ 00:11:29.450 START TEST raid_state_function_test_sb 00:11:29.450 ************************************ 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i++ )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:29.450 14:28:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70231 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:29.450 Process raid pid: 70231 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70231' 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70231 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70231 ']' 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.450 14:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.708 [2024-11-20 14:28:30.505698] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:11:29.708 [2024-11-20 14:28:30.505880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.708 [2024-11-20 14:28:30.687149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.967 [2024-11-20 14:28:30.840272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.225 [2024-11-20 14:28:31.060254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.225 [2024-11-20 14:28:31.060323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.791 [2024-11-20 14:28:31.661148] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.791 [2024-11-20 14:28:31.661272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.791 [2024-11-20 14:28:31.661300] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.791 [2024-11-20 14:28:31.661336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.791 [2024-11-20 14:28:31.661358] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:30.791 [2024-11-20 14:28:31.661385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.791 [2024-11-20 14:28:31.661402] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:30.791 [2024-11-20 14:28:31.661431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.791 14:28:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.791 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.791 "name": "Existed_Raid", 00:11:30.791 "uuid": "c255e16b-baf5-47c2-8c9b-3b40dd87b3ff", 00:11:30.791 "strip_size_kb": 64, 00:11:30.791 "state": "configuring", 00:11:30.791 "raid_level": "raid0", 00:11:30.791 "superblock": true, 00:11:30.791 "num_base_bdevs": 4, 00:11:30.791 "num_base_bdevs_discovered": 0, 00:11:30.791 "num_base_bdevs_operational": 4, 00:11:30.792 "base_bdevs_list": [ 00:11:30.792 { 00:11:30.792 "name": "BaseBdev1", 00:11:30.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.792 "is_configured": false, 00:11:30.792 "data_offset": 0, 00:11:30.792 "data_size": 0 00:11:30.792 }, 00:11:30.792 { 00:11:30.792 "name": "BaseBdev2", 00:11:30.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.792 "is_configured": false, 00:11:30.792 "data_offset": 0, 00:11:30.792 "data_size": 0 00:11:30.792 }, 00:11:30.792 { 00:11:30.792 "name": "BaseBdev3", 00:11:30.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.792 "is_configured": false, 00:11:30.792 "data_offset": 0, 00:11:30.792 "data_size": 0 00:11:30.792 }, 00:11:30.792 { 00:11:30.792 "name": "BaseBdev4", 00:11:30.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.792 "is_configured": false, 00:11:30.792 "data_offset": 0, 00:11:30.792 "data_size": 0 00:11:30.792 } 00:11:30.792 ] 00:11:30.792 }' 00:11:30.792 14:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.792 14:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.359 [2024-11-20 14:28:32.245171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.359 [2024-11-20 14:28:32.245255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.359 [2024-11-20 14:28:32.253126] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.359 [2024-11-20 14:28:32.253194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.359 [2024-11-20 14:28:32.253219] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.359 [2024-11-20 14:28:32.253247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.359 [2024-11-20 14:28:32.253264] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.359 [2024-11-20 14:28:32.253290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.359 [2024-11-20 14:28:32.253302] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:11:31.359 [2024-11-20 14:28:32.253318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.359 [2024-11-20 14:28:32.307671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.359 BaseBdev1 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.359 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.360 [ 00:11:31.360 { 00:11:31.360 "name": "BaseBdev1", 00:11:31.360 "aliases": [ 00:11:31.360 "80064a79-825d-4cd6-9739-8998e9319fe1" 00:11:31.360 ], 00:11:31.360 "product_name": "Malloc disk", 00:11:31.360 "block_size": 512, 00:11:31.360 "num_blocks": 65536, 00:11:31.360 "uuid": "80064a79-825d-4cd6-9739-8998e9319fe1", 00:11:31.360 "assigned_rate_limits": { 00:11:31.360 "rw_ios_per_sec": 0, 00:11:31.360 "rw_mbytes_per_sec": 0, 00:11:31.360 "r_mbytes_per_sec": 0, 00:11:31.360 "w_mbytes_per_sec": 0 00:11:31.360 }, 00:11:31.360 "claimed": true, 00:11:31.360 "claim_type": "exclusive_write", 00:11:31.360 "zoned": false, 00:11:31.360 "supported_io_types": { 00:11:31.360 "read": true, 00:11:31.360 "write": true, 00:11:31.360 "unmap": true, 00:11:31.360 "flush": true, 00:11:31.360 "reset": true, 00:11:31.360 "nvme_admin": false, 00:11:31.360 "nvme_io": false, 00:11:31.360 "nvme_io_md": false, 00:11:31.360 "write_zeroes": true, 00:11:31.360 "zcopy": true, 00:11:31.360 "get_zone_info": false, 00:11:31.360 "zone_management": false, 00:11:31.360 "zone_append": false, 00:11:31.360 "compare": false, 00:11:31.360 "compare_and_write": false, 00:11:31.360 "abort": true, 00:11:31.360 "seek_hole": false, 00:11:31.360 "seek_data": false, 00:11:31.360 "copy": true, 00:11:31.360 "nvme_iov_md": false 00:11:31.360 }, 00:11:31.360 "memory_domains": [ 00:11:31.360 { 00:11:31.360 "dma_device_id": "system", 00:11:31.360 "dma_device_type": 1 00:11:31.360 }, 00:11:31.360 { 00:11:31.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.360 "dma_device_type": 2 00:11:31.360 } 00:11:31.360 ], 00:11:31.360 "driver_specific": {} 
00:11:31.360 } 00:11:31.360 ] 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.360 "name": "Existed_Raid", 00:11:31.360 "uuid": "eaec1e4f-c7e4-40b0-bbca-89fc2ceead94", 00:11:31.360 "strip_size_kb": 64, 00:11:31.360 "state": "configuring", 00:11:31.360 "raid_level": "raid0", 00:11:31.360 "superblock": true, 00:11:31.360 "num_base_bdevs": 4, 00:11:31.360 "num_base_bdevs_discovered": 1, 00:11:31.360 "num_base_bdevs_operational": 4, 00:11:31.360 "base_bdevs_list": [ 00:11:31.360 { 00:11:31.360 "name": "BaseBdev1", 00:11:31.360 "uuid": "80064a79-825d-4cd6-9739-8998e9319fe1", 00:11:31.360 "is_configured": true, 00:11:31.360 "data_offset": 2048, 00:11:31.360 "data_size": 63488 00:11:31.360 }, 00:11:31.360 { 00:11:31.360 "name": "BaseBdev2", 00:11:31.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.360 "is_configured": false, 00:11:31.360 "data_offset": 0, 00:11:31.360 "data_size": 0 00:11:31.360 }, 00:11:31.360 { 00:11:31.360 "name": "BaseBdev3", 00:11:31.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.360 "is_configured": false, 00:11:31.360 "data_offset": 0, 00:11:31.360 "data_size": 0 00:11:31.360 }, 00:11:31.360 { 00:11:31.360 "name": "BaseBdev4", 00:11:31.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.360 "is_configured": false, 00:11:31.360 "data_offset": 0, 00:11:31.360 "data_size": 0 00:11:31.360 } 00:11:31.360 ] 00:11:31.360 }' 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.360 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:31.926 [2024-11-20 14:28:32.867799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.926 [2024-11-20 14:28:32.867875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.926 [2024-11-20 14:28:32.875852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.926 [2024-11-20 14:28:32.878370] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.926 [2024-11-20 14:28:32.878428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.926 [2024-11-20 14:28:32.878445] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.926 [2024-11-20 14:28:32.878464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.926 [2024-11-20 14:28:32.878474] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.926 [2024-11-20 14:28:32.878488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:31.926 14:28:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.926 "name": 
"Existed_Raid", 00:11:31.926 "uuid": "4bf35644-bee2-4e2e-b8cb-84222febae82", 00:11:31.926 "strip_size_kb": 64, 00:11:31.926 "state": "configuring", 00:11:31.926 "raid_level": "raid0", 00:11:31.926 "superblock": true, 00:11:31.926 "num_base_bdevs": 4, 00:11:31.926 "num_base_bdevs_discovered": 1, 00:11:31.926 "num_base_bdevs_operational": 4, 00:11:31.926 "base_bdevs_list": [ 00:11:31.926 { 00:11:31.926 "name": "BaseBdev1", 00:11:31.926 "uuid": "80064a79-825d-4cd6-9739-8998e9319fe1", 00:11:31.926 "is_configured": true, 00:11:31.926 "data_offset": 2048, 00:11:31.926 "data_size": 63488 00:11:31.926 }, 00:11:31.926 { 00:11:31.926 "name": "BaseBdev2", 00:11:31.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.926 "is_configured": false, 00:11:31.926 "data_offset": 0, 00:11:31.926 "data_size": 0 00:11:31.926 }, 00:11:31.926 { 00:11:31.926 "name": "BaseBdev3", 00:11:31.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.926 "is_configured": false, 00:11:31.926 "data_offset": 0, 00:11:31.926 "data_size": 0 00:11:31.926 }, 00:11:31.926 { 00:11:31.926 "name": "BaseBdev4", 00:11:31.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.926 "is_configured": false, 00:11:31.926 "data_offset": 0, 00:11:31.926 "data_size": 0 00:11:31.926 } 00:11:31.926 ] 00:11:31.926 }' 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.926 14:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.564 [2024-11-20 14:28:33.431313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:11:32.564 BaseBdev2 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.564 [ 00:11:32.564 { 00:11:32.564 "name": "BaseBdev2", 00:11:32.564 "aliases": [ 00:11:32.564 "04c873ae-6c81-4dd8-bbbb-2d5858ac12f3" 00:11:32.564 ], 00:11:32.564 "product_name": "Malloc disk", 00:11:32.564 "block_size": 512, 00:11:32.564 "num_blocks": 65536, 00:11:32.564 "uuid": "04c873ae-6c81-4dd8-bbbb-2d5858ac12f3", 00:11:32.564 
"assigned_rate_limits": { 00:11:32.564 "rw_ios_per_sec": 0, 00:11:32.564 "rw_mbytes_per_sec": 0, 00:11:32.564 "r_mbytes_per_sec": 0, 00:11:32.564 "w_mbytes_per_sec": 0 00:11:32.564 }, 00:11:32.564 "claimed": true, 00:11:32.564 "claim_type": "exclusive_write", 00:11:32.564 "zoned": false, 00:11:32.564 "supported_io_types": { 00:11:32.564 "read": true, 00:11:32.564 "write": true, 00:11:32.564 "unmap": true, 00:11:32.564 "flush": true, 00:11:32.564 "reset": true, 00:11:32.564 "nvme_admin": false, 00:11:32.564 "nvme_io": false, 00:11:32.564 "nvme_io_md": false, 00:11:32.564 "write_zeroes": true, 00:11:32.564 "zcopy": true, 00:11:32.564 "get_zone_info": false, 00:11:32.564 "zone_management": false, 00:11:32.564 "zone_append": false, 00:11:32.564 "compare": false, 00:11:32.564 "compare_and_write": false, 00:11:32.564 "abort": true, 00:11:32.564 "seek_hole": false, 00:11:32.564 "seek_data": false, 00:11:32.564 "copy": true, 00:11:32.564 "nvme_iov_md": false 00:11:32.564 }, 00:11:32.564 "memory_domains": [ 00:11:32.564 { 00:11:32.564 "dma_device_id": "system", 00:11:32.564 "dma_device_type": 1 00:11:32.564 }, 00:11:32.564 { 00:11:32.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.564 "dma_device_type": 2 00:11:32.564 } 00:11:32.564 ], 00:11:32.564 "driver_specific": {} 00:11:32.564 } 00:11:32.564 ] 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.564 "name": "Existed_Raid", 00:11:32.564 "uuid": "4bf35644-bee2-4e2e-b8cb-84222febae82", 00:11:32.564 "strip_size_kb": 64, 00:11:32.564 "state": "configuring", 00:11:32.564 "raid_level": "raid0", 00:11:32.564 "superblock": true, 00:11:32.564 "num_base_bdevs": 4, 00:11:32.564 "num_base_bdevs_discovered": 2, 00:11:32.564 "num_base_bdevs_operational": 4, 
00:11:32.564 "base_bdevs_list": [ 00:11:32.564 { 00:11:32.564 "name": "BaseBdev1", 00:11:32.564 "uuid": "80064a79-825d-4cd6-9739-8998e9319fe1", 00:11:32.564 "is_configured": true, 00:11:32.564 "data_offset": 2048, 00:11:32.564 "data_size": 63488 00:11:32.564 }, 00:11:32.564 { 00:11:32.564 "name": "BaseBdev2", 00:11:32.564 "uuid": "04c873ae-6c81-4dd8-bbbb-2d5858ac12f3", 00:11:32.564 "is_configured": true, 00:11:32.564 "data_offset": 2048, 00:11:32.564 "data_size": 63488 00:11:32.564 }, 00:11:32.564 { 00:11:32.564 "name": "BaseBdev3", 00:11:32.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.564 "is_configured": false, 00:11:32.564 "data_offset": 0, 00:11:32.564 "data_size": 0 00:11:32.564 }, 00:11:32.564 { 00:11:32.564 "name": "BaseBdev4", 00:11:32.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.564 "is_configured": false, 00:11:32.564 "data_offset": 0, 00:11:32.564 "data_size": 0 00:11:32.564 } 00:11:32.564 ] 00:11:32.564 }' 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.564 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.130 [2024-11-20 14:28:33.978864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.130 BaseBdev3 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.130 14:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.130 [ 00:11:33.130 { 00:11:33.130 "name": "BaseBdev3", 00:11:33.130 "aliases": [ 00:11:33.130 "40445d78-1d4d-4a87-adec-ec7bae9c5103" 00:11:33.130 ], 00:11:33.130 "product_name": "Malloc disk", 00:11:33.130 "block_size": 512, 00:11:33.130 "num_blocks": 65536, 00:11:33.130 "uuid": "40445d78-1d4d-4a87-adec-ec7bae9c5103", 00:11:33.130 "assigned_rate_limits": { 00:11:33.130 "rw_ios_per_sec": 0, 00:11:33.130 "rw_mbytes_per_sec": 0, 00:11:33.130 "r_mbytes_per_sec": 0, 00:11:33.130 "w_mbytes_per_sec": 0 00:11:33.130 }, 00:11:33.130 "claimed": true, 00:11:33.130 "claim_type": "exclusive_write", 00:11:33.130 "zoned": false, 00:11:33.130 "supported_io_types": { 00:11:33.130 "read": true, 00:11:33.130 
"write": true, 00:11:33.130 "unmap": true, 00:11:33.130 "flush": true, 00:11:33.130 "reset": true, 00:11:33.130 "nvme_admin": false, 00:11:33.130 "nvme_io": false, 00:11:33.130 "nvme_io_md": false, 00:11:33.130 "write_zeroes": true, 00:11:33.130 "zcopy": true, 00:11:33.130 "get_zone_info": false, 00:11:33.130 "zone_management": false, 00:11:33.130 "zone_append": false, 00:11:33.130 "compare": false, 00:11:33.130 "compare_and_write": false, 00:11:33.130 "abort": true, 00:11:33.130 "seek_hole": false, 00:11:33.130 "seek_data": false, 00:11:33.130 "copy": true, 00:11:33.130 "nvme_iov_md": false 00:11:33.130 }, 00:11:33.130 "memory_domains": [ 00:11:33.130 { 00:11:33.131 "dma_device_id": "system", 00:11:33.131 "dma_device_type": 1 00:11:33.131 }, 00:11:33.131 { 00:11:33.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.131 "dma_device_type": 2 00:11:33.131 } 00:11:33.131 ], 00:11:33.131 "driver_specific": {} 00:11:33.131 } 00:11:33.131 ] 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.131 "name": "Existed_Raid", 00:11:33.131 "uuid": "4bf35644-bee2-4e2e-b8cb-84222febae82", 00:11:33.131 "strip_size_kb": 64, 00:11:33.131 "state": "configuring", 00:11:33.131 "raid_level": "raid0", 00:11:33.131 "superblock": true, 00:11:33.131 "num_base_bdevs": 4, 00:11:33.131 "num_base_bdevs_discovered": 3, 00:11:33.131 "num_base_bdevs_operational": 4, 00:11:33.131 "base_bdevs_list": [ 00:11:33.131 { 00:11:33.131 "name": "BaseBdev1", 00:11:33.131 "uuid": "80064a79-825d-4cd6-9739-8998e9319fe1", 00:11:33.131 "is_configured": true, 00:11:33.131 "data_offset": 2048, 00:11:33.131 "data_size": 63488 00:11:33.131 }, 00:11:33.131 { 00:11:33.131 "name": "BaseBdev2", 00:11:33.131 "uuid": 
"04c873ae-6c81-4dd8-bbbb-2d5858ac12f3", 00:11:33.131 "is_configured": true, 00:11:33.131 "data_offset": 2048, 00:11:33.131 "data_size": 63488 00:11:33.131 }, 00:11:33.131 { 00:11:33.131 "name": "BaseBdev3", 00:11:33.131 "uuid": "40445d78-1d4d-4a87-adec-ec7bae9c5103", 00:11:33.131 "is_configured": true, 00:11:33.131 "data_offset": 2048, 00:11:33.131 "data_size": 63488 00:11:33.131 }, 00:11:33.131 { 00:11:33.131 "name": "BaseBdev4", 00:11:33.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.131 "is_configured": false, 00:11:33.131 "data_offset": 0, 00:11:33.131 "data_size": 0 00:11:33.131 } 00:11:33.131 ] 00:11:33.131 }' 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.131 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.696 [2024-11-20 14:28:34.566471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.696 [2024-11-20 14:28:34.566919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:33.696 [2024-11-20 14:28:34.566952] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.696 BaseBdev4 00:11:33.696 [2024-11-20 14:28:34.567380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.696 [2024-11-20 14:28:34.567730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:33.696 [2024-11-20 14:28:34.567777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.696 [2024-11-20 14:28:34.568044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.696 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.697 [ 00:11:33.697 { 00:11:33.697 "name": "BaseBdev4", 00:11:33.697 "aliases": [ 00:11:33.697 "4c42dedb-5670-4ba9-bb0e-ce6434926fd7" 00:11:33.697 ], 00:11:33.697 "product_name": "Malloc disk", 00:11:33.697 "block_size": 512, 00:11:33.697 
"num_blocks": 65536, 00:11:33.697 "uuid": "4c42dedb-5670-4ba9-bb0e-ce6434926fd7", 00:11:33.697 "assigned_rate_limits": { 00:11:33.697 "rw_ios_per_sec": 0, 00:11:33.697 "rw_mbytes_per_sec": 0, 00:11:33.697 "r_mbytes_per_sec": 0, 00:11:33.697 "w_mbytes_per_sec": 0 00:11:33.697 }, 00:11:33.697 "claimed": true, 00:11:33.697 "claim_type": "exclusive_write", 00:11:33.697 "zoned": false, 00:11:33.697 "supported_io_types": { 00:11:33.697 "read": true, 00:11:33.697 "write": true, 00:11:33.697 "unmap": true, 00:11:33.697 "flush": true, 00:11:33.697 "reset": true, 00:11:33.697 "nvme_admin": false, 00:11:33.697 "nvme_io": false, 00:11:33.697 "nvme_io_md": false, 00:11:33.697 "write_zeroes": true, 00:11:33.697 "zcopy": true, 00:11:33.697 "get_zone_info": false, 00:11:33.697 "zone_management": false, 00:11:33.697 "zone_append": false, 00:11:33.697 "compare": false, 00:11:33.697 "compare_and_write": false, 00:11:33.697 "abort": true, 00:11:33.697 "seek_hole": false, 00:11:33.697 "seek_data": false, 00:11:33.697 "copy": true, 00:11:33.697 "nvme_iov_md": false 00:11:33.697 }, 00:11:33.697 "memory_domains": [ 00:11:33.697 { 00:11:33.697 "dma_device_id": "system", 00:11:33.697 "dma_device_type": 1 00:11:33.697 }, 00:11:33.697 { 00:11:33.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.697 "dma_device_type": 2 00:11:33.697 } 00:11:33.697 ], 00:11:33.697 "driver_specific": {} 00:11:33.697 } 00:11:33.697 ] 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.697 "name": "Existed_Raid", 00:11:33.697 "uuid": "4bf35644-bee2-4e2e-b8cb-84222febae82", 00:11:33.697 "strip_size_kb": 64, 00:11:33.697 "state": "online", 00:11:33.697 "raid_level": "raid0", 00:11:33.697 "superblock": true, 00:11:33.697 "num_base_bdevs": 4, 
00:11:33.697 "num_base_bdevs_discovered": 4, 00:11:33.697 "num_base_bdevs_operational": 4, 00:11:33.697 "base_bdevs_list": [ 00:11:33.697 { 00:11:33.697 "name": "BaseBdev1", 00:11:33.697 "uuid": "80064a79-825d-4cd6-9739-8998e9319fe1", 00:11:33.697 "is_configured": true, 00:11:33.697 "data_offset": 2048, 00:11:33.697 "data_size": 63488 00:11:33.697 }, 00:11:33.697 { 00:11:33.697 "name": "BaseBdev2", 00:11:33.697 "uuid": "04c873ae-6c81-4dd8-bbbb-2d5858ac12f3", 00:11:33.697 "is_configured": true, 00:11:33.697 "data_offset": 2048, 00:11:33.697 "data_size": 63488 00:11:33.697 }, 00:11:33.697 { 00:11:33.697 "name": "BaseBdev3", 00:11:33.697 "uuid": "40445d78-1d4d-4a87-adec-ec7bae9c5103", 00:11:33.697 "is_configured": true, 00:11:33.697 "data_offset": 2048, 00:11:33.697 "data_size": 63488 00:11:33.697 }, 00:11:33.697 { 00:11:33.697 "name": "BaseBdev4", 00:11:33.697 "uuid": "4c42dedb-5670-4ba9-bb0e-ce6434926fd7", 00:11:33.697 "is_configured": true, 00:11:33.697 "data_offset": 2048, 00:11:33.697 "data_size": 63488 00:11:33.697 } 00:11:33.697 ] 00:11:33.697 }' 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.697 14:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.263 
14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.263 [2024-11-20 14:28:35.107132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.263 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.263 "name": "Existed_Raid", 00:11:34.263 "aliases": [ 00:11:34.263 "4bf35644-bee2-4e2e-b8cb-84222febae82" 00:11:34.263 ], 00:11:34.263 "product_name": "Raid Volume", 00:11:34.263 "block_size": 512, 00:11:34.263 "num_blocks": 253952, 00:11:34.263 "uuid": "4bf35644-bee2-4e2e-b8cb-84222febae82", 00:11:34.263 "assigned_rate_limits": { 00:11:34.263 "rw_ios_per_sec": 0, 00:11:34.263 "rw_mbytes_per_sec": 0, 00:11:34.263 "r_mbytes_per_sec": 0, 00:11:34.263 "w_mbytes_per_sec": 0 00:11:34.263 }, 00:11:34.263 "claimed": false, 00:11:34.263 "zoned": false, 00:11:34.263 "supported_io_types": { 00:11:34.263 "read": true, 00:11:34.263 "write": true, 00:11:34.263 "unmap": true, 00:11:34.263 "flush": true, 00:11:34.263 "reset": true, 00:11:34.263 "nvme_admin": false, 00:11:34.263 "nvme_io": false, 00:11:34.263 "nvme_io_md": false, 00:11:34.263 "write_zeroes": true, 00:11:34.263 "zcopy": false, 00:11:34.263 "get_zone_info": false, 00:11:34.263 "zone_management": false, 00:11:34.263 "zone_append": false, 00:11:34.263 "compare": false, 00:11:34.263 "compare_and_write": false, 00:11:34.263 "abort": false, 00:11:34.263 "seek_hole": false, 00:11:34.263 "seek_data": false, 00:11:34.263 "copy": false, 00:11:34.263 
"nvme_iov_md": false 00:11:34.263 }, 00:11:34.263 "memory_domains": [ 00:11:34.263 { 00:11:34.263 "dma_device_id": "system", 00:11:34.263 "dma_device_type": 1 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.263 "dma_device_type": 2 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "system", 00:11:34.263 "dma_device_type": 1 00:11:34.263 }, 00:11:34.263 { 00:11:34.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.263 "dma_device_type": 2 00:11:34.263 }, 00:11:34.264 { 00:11:34.264 "dma_device_id": "system", 00:11:34.264 "dma_device_type": 1 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.264 "dma_device_type": 2 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "dma_device_id": "system", 00:11:34.264 "dma_device_type": 1 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.264 "dma_device_type": 2 00:11:34.264 } 00:11:34.264 ], 00:11:34.264 "driver_specific": { 00:11:34.264 "raid": { 00:11:34.264 "uuid": "4bf35644-bee2-4e2e-b8cb-84222febae82", 00:11:34.264 "strip_size_kb": 64, 00:11:34.264 "state": "online", 00:11:34.264 "raid_level": "raid0", 00:11:34.264 "superblock": true, 00:11:34.264 "num_base_bdevs": 4, 00:11:34.264 "num_base_bdevs_discovered": 4, 00:11:34.264 "num_base_bdevs_operational": 4, 00:11:34.264 "base_bdevs_list": [ 00:11:34.264 { 00:11:34.264 "name": "BaseBdev1", 00:11:34.264 "uuid": "80064a79-825d-4cd6-9739-8998e9319fe1", 00:11:34.264 "is_configured": true, 00:11:34.264 "data_offset": 2048, 00:11:34.264 "data_size": 63488 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "name": "BaseBdev2", 00:11:34.264 "uuid": "04c873ae-6c81-4dd8-bbbb-2d5858ac12f3", 00:11:34.264 "is_configured": true, 00:11:34.264 "data_offset": 2048, 00:11:34.264 "data_size": 63488 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "name": "BaseBdev3", 00:11:34.264 "uuid": "40445d78-1d4d-4a87-adec-ec7bae9c5103", 00:11:34.264 "is_configured": true, 
00:11:34.264 "data_offset": 2048, 00:11:34.264 "data_size": 63488 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "name": "BaseBdev4", 00:11:34.264 "uuid": "4c42dedb-5670-4ba9-bb0e-ce6434926fd7", 00:11:34.264 "is_configured": true, 00:11:34.264 "data_offset": 2048, 00:11:34.264 "data_size": 63488 00:11:34.264 } 00:11:34.264 ] 00:11:34.264 } 00:11:34.264 } 00:11:34.264 }' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:34.264 BaseBdev2 00:11:34.264 BaseBdev3 00:11:34.264 BaseBdev4' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.264 14:28:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.264 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.522 [2024-11-20 14:28:35.462926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.522 [2024-11-20 14:28:35.462976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.522 [2024-11-20 14:28:35.463079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.522 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.781 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:34.781 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.781 "name": "Existed_Raid", 00:11:34.781 "uuid": "4bf35644-bee2-4e2e-b8cb-84222febae82", 00:11:34.781 "strip_size_kb": 64, 00:11:34.781 "state": "offline", 00:11:34.781 "raid_level": "raid0", 00:11:34.781 "superblock": true, 00:11:34.781 "num_base_bdevs": 4, 00:11:34.781 "num_base_bdevs_discovered": 3, 00:11:34.781 "num_base_bdevs_operational": 3, 00:11:34.781 "base_bdevs_list": [ 00:11:34.781 { 00:11:34.781 "name": null, 00:11:34.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.781 "is_configured": false, 00:11:34.781 "data_offset": 0, 00:11:34.781 "data_size": 63488 00:11:34.781 }, 00:11:34.781 { 00:11:34.781 "name": "BaseBdev2", 00:11:34.781 "uuid": "04c873ae-6c81-4dd8-bbbb-2d5858ac12f3", 00:11:34.781 "is_configured": true, 00:11:34.781 "data_offset": 2048, 00:11:34.781 "data_size": 63488 00:11:34.781 }, 00:11:34.781 { 00:11:34.781 "name": "BaseBdev3", 00:11:34.781 "uuid": "40445d78-1d4d-4a87-adec-ec7bae9c5103", 00:11:34.781 "is_configured": true, 00:11:34.781 "data_offset": 2048, 00:11:34.781 "data_size": 63488 00:11:34.781 }, 00:11:34.781 { 00:11:34.781 "name": "BaseBdev4", 00:11:34.781 "uuid": "4c42dedb-5670-4ba9-bb0e-ce6434926fd7", 00:11:34.781 "is_configured": true, 00:11:34.781 "data_offset": 2048, 00:11:34.781 "data_size": 63488 00:11:34.781 } 00:11:34.781 ] 00:11:34.781 }' 00:11:34.781 14:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.781 14:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.347 14:28:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.347 [2024-11-20 14:28:36.191866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.347 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.347 [2024-11-20 14:28:36.338534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:35.605 14:28:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.605 [2024-11-20 14:28:36.477895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:35.605 [2024-11-20 14:28:36.477969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.605 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.864 BaseBdev2 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.864 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.864 [ 00:11:35.864 { 00:11:35.864 "name": "BaseBdev2", 00:11:35.864 "aliases": [ 00:11:35.864 
"2b96419b-c7c0-4de7-8991-d4d9d2c73453" 00:11:35.864 ], 00:11:35.864 "product_name": "Malloc disk", 00:11:35.864 "block_size": 512, 00:11:35.864 "num_blocks": 65536, 00:11:35.864 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:35.864 "assigned_rate_limits": { 00:11:35.864 "rw_ios_per_sec": 0, 00:11:35.864 "rw_mbytes_per_sec": 0, 00:11:35.864 "r_mbytes_per_sec": 0, 00:11:35.864 "w_mbytes_per_sec": 0 00:11:35.864 }, 00:11:35.864 "claimed": false, 00:11:35.864 "zoned": false, 00:11:35.864 "supported_io_types": { 00:11:35.864 "read": true, 00:11:35.864 "write": true, 00:11:35.864 "unmap": true, 00:11:35.864 "flush": true, 00:11:35.864 "reset": true, 00:11:35.864 "nvme_admin": false, 00:11:35.864 "nvme_io": false, 00:11:35.864 "nvme_io_md": false, 00:11:35.864 "write_zeroes": true, 00:11:35.864 "zcopy": true, 00:11:35.864 "get_zone_info": false, 00:11:35.864 "zone_management": false, 00:11:35.864 "zone_append": false, 00:11:35.864 "compare": false, 00:11:35.864 "compare_and_write": false, 00:11:35.864 "abort": true, 00:11:35.864 "seek_hole": false, 00:11:35.864 "seek_data": false, 00:11:35.864 "copy": true, 00:11:35.864 "nvme_iov_md": false 00:11:35.864 }, 00:11:35.864 "memory_domains": [ 00:11:35.864 { 00:11:35.864 "dma_device_id": "system", 00:11:35.864 "dma_device_type": 1 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.864 "dma_device_type": 2 00:11:35.864 } 00:11:35.864 ], 00:11:35.865 "driver_specific": {} 00:11:35.865 } 00:11:35.865 ] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.865 14:28:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.865 BaseBdev3 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.865 [ 00:11:35.865 { 
00:11:35.865 "name": "BaseBdev3", 00:11:35.865 "aliases": [ 00:11:35.865 "6d26f390-9602-4b30-a885-6928cd57478c" 00:11:35.865 ], 00:11:35.865 "product_name": "Malloc disk", 00:11:35.865 "block_size": 512, 00:11:35.865 "num_blocks": 65536, 00:11:35.865 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:35.865 "assigned_rate_limits": { 00:11:35.865 "rw_ios_per_sec": 0, 00:11:35.865 "rw_mbytes_per_sec": 0, 00:11:35.865 "r_mbytes_per_sec": 0, 00:11:35.865 "w_mbytes_per_sec": 0 00:11:35.865 }, 00:11:35.865 "claimed": false, 00:11:35.865 "zoned": false, 00:11:35.865 "supported_io_types": { 00:11:35.865 "read": true, 00:11:35.865 "write": true, 00:11:35.865 "unmap": true, 00:11:35.865 "flush": true, 00:11:35.865 "reset": true, 00:11:35.865 "nvme_admin": false, 00:11:35.865 "nvme_io": false, 00:11:35.865 "nvme_io_md": false, 00:11:35.865 "write_zeroes": true, 00:11:35.865 "zcopy": true, 00:11:35.865 "get_zone_info": false, 00:11:35.865 "zone_management": false, 00:11:35.865 "zone_append": false, 00:11:35.865 "compare": false, 00:11:35.865 "compare_and_write": false, 00:11:35.865 "abort": true, 00:11:35.865 "seek_hole": false, 00:11:35.865 "seek_data": false, 00:11:35.865 "copy": true, 00:11:35.865 "nvme_iov_md": false 00:11:35.865 }, 00:11:35.865 "memory_domains": [ 00:11:35.865 { 00:11:35.865 "dma_device_id": "system", 00:11:35.865 "dma_device_type": 1 00:11:35.865 }, 00:11:35.865 { 00:11:35.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.865 "dma_device_type": 2 00:11:35.865 } 00:11:35.865 ], 00:11:35.865 "driver_specific": {} 00:11:35.865 } 00:11:35.865 ] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.865 BaseBdev4 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.865 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:35.865 [ 00:11:35.865 { 00:11:35.865 "name": "BaseBdev4", 00:11:35.865 "aliases": [ 00:11:35.865 "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd" 00:11:35.865 ], 00:11:35.865 "product_name": "Malloc disk", 00:11:35.865 "block_size": 512, 00:11:35.865 "num_blocks": 65536, 00:11:35.865 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:35.865 "assigned_rate_limits": { 00:11:35.865 "rw_ios_per_sec": 0, 00:11:35.865 "rw_mbytes_per_sec": 0, 00:11:35.865 "r_mbytes_per_sec": 0, 00:11:35.865 "w_mbytes_per_sec": 0 00:11:35.865 }, 00:11:35.865 "claimed": false, 00:11:35.865 "zoned": false, 00:11:35.865 "supported_io_types": { 00:11:35.865 "read": true, 00:11:35.865 "write": true, 00:11:35.865 "unmap": true, 00:11:35.865 "flush": true, 00:11:35.865 "reset": true, 00:11:35.865 "nvme_admin": false, 00:11:35.865 "nvme_io": false, 00:11:35.865 "nvme_io_md": false, 00:11:35.865 "write_zeroes": true, 00:11:35.865 "zcopy": true, 00:11:35.865 "get_zone_info": false, 00:11:35.865 "zone_management": false, 00:11:35.866 "zone_append": false, 00:11:35.866 "compare": false, 00:11:35.866 "compare_and_write": false, 00:11:35.866 "abort": true, 00:11:35.866 "seek_hole": false, 00:11:35.866 "seek_data": false, 00:11:35.866 "copy": true, 00:11:35.866 "nvme_iov_md": false 00:11:35.866 }, 00:11:35.866 "memory_domains": [ 00:11:35.866 { 00:11:35.866 "dma_device_id": "system", 00:11:35.866 "dma_device_type": 1 00:11:35.866 }, 00:11:35.866 { 00:11:35.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.866 "dma_device_type": 2 00:11:35.866 } 00:11:35.866 ], 00:11:35.866 "driver_specific": {} 00:11:35.866 } 00:11:35.866 ] 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.866 14:28:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.866 [2024-11-20 14:28:36.848746] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.866 [2024-11-20 14:28:36.848817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.866 [2024-11-20 14:28:36.848876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.866 [2024-11-20 14:28:36.851778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.866 [2024-11-20 14:28:36.851866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.866 "name": "Existed_Raid", 00:11:35.866 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:35.866 "strip_size_kb": 64, 00:11:35.866 "state": "configuring", 00:11:35.866 "raid_level": "raid0", 00:11:35.866 "superblock": true, 00:11:35.866 "num_base_bdevs": 4, 00:11:35.866 "num_base_bdevs_discovered": 3, 00:11:35.866 "num_base_bdevs_operational": 4, 00:11:35.866 "base_bdevs_list": [ 00:11:35.866 { 00:11:35.866 "name": "BaseBdev1", 00:11:35.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.866 "is_configured": false, 00:11:35.866 "data_offset": 0, 00:11:35.866 "data_size": 0 00:11:35.866 }, 00:11:35.866 { 00:11:35.866 "name": "BaseBdev2", 00:11:35.866 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:35.866 "is_configured": true, 00:11:35.866 "data_offset": 2048, 00:11:35.866 "data_size": 63488 
00:11:35.866 }, 00:11:35.866 { 00:11:35.866 "name": "BaseBdev3", 00:11:35.866 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:35.866 "is_configured": true, 00:11:35.866 "data_offset": 2048, 00:11:35.866 "data_size": 63488 00:11:35.866 }, 00:11:35.866 { 00:11:35.866 "name": "BaseBdev4", 00:11:35.866 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:35.866 "is_configured": true, 00:11:35.866 "data_offset": 2048, 00:11:35.866 "data_size": 63488 00:11:35.866 } 00:11:35.866 ] 00:11:35.866 }' 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.866 14:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.431 [2024-11-20 14:28:37.404925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.431 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.431 "name": "Existed_Raid", 00:11:36.431 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:36.431 "strip_size_kb": 64, 00:11:36.431 "state": "configuring", 00:11:36.431 "raid_level": "raid0", 00:11:36.431 "superblock": true, 00:11:36.431 "num_base_bdevs": 4, 00:11:36.431 "num_base_bdevs_discovered": 2, 00:11:36.431 "num_base_bdevs_operational": 4, 00:11:36.431 "base_bdevs_list": [ 00:11:36.431 { 00:11:36.431 "name": "BaseBdev1", 00:11:36.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.431 "is_configured": false, 00:11:36.431 "data_offset": 0, 00:11:36.431 "data_size": 0 00:11:36.431 }, 00:11:36.431 { 00:11:36.431 "name": null, 00:11:36.431 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:36.431 "is_configured": false, 00:11:36.431 "data_offset": 0, 00:11:36.432 "data_size": 63488 
00:11:36.432 }, 00:11:36.432 { 00:11:36.432 "name": "BaseBdev3", 00:11:36.432 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:36.432 "is_configured": true, 00:11:36.432 "data_offset": 2048, 00:11:36.432 "data_size": 63488 00:11:36.432 }, 00:11:36.432 { 00:11:36.432 "name": "BaseBdev4", 00:11:36.432 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:36.432 "is_configured": true, 00:11:36.432 "data_offset": 2048, 00:11:36.432 "data_size": 63488 00:11:36.432 } 00:11:36.432 ] 00:11:36.432 }' 00:11:36.432 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.432 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.997 14:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.997 [2024-11-20 14:28:38.029769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.997 BaseBdev1 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.997 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.997 [ 00:11:36.997 { 00:11:36.997 "name": "BaseBdev1", 00:11:36.997 "aliases": [ 00:11:36.997 "0814a701-2025-48b7-adb3-7eaab281a7a2" 00:11:36.997 ], 00:11:36.997 "product_name": "Malloc disk", 00:11:36.997 "block_size": 512, 00:11:36.997 "num_blocks": 65536, 00:11:36.997 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:36.997 "assigned_rate_limits": { 00:11:36.997 "rw_ios_per_sec": 0, 00:11:36.997 "rw_mbytes_per_sec": 0, 
00:11:36.997 "r_mbytes_per_sec": 0, 00:11:36.997 "w_mbytes_per_sec": 0 00:11:36.997 }, 00:11:36.997 "claimed": true, 00:11:36.997 "claim_type": "exclusive_write", 00:11:36.997 "zoned": false, 00:11:36.997 "supported_io_types": { 00:11:36.997 "read": true, 00:11:36.997 "write": true, 00:11:36.997 "unmap": true, 00:11:36.997 "flush": true, 00:11:36.997 "reset": true, 00:11:36.997 "nvme_admin": false, 00:11:36.997 "nvme_io": false, 00:11:36.997 "nvme_io_md": false, 00:11:36.997 "write_zeroes": true, 00:11:36.997 "zcopy": true, 00:11:36.997 "get_zone_info": false, 00:11:36.997 "zone_management": false, 00:11:36.997 "zone_append": false, 00:11:36.997 "compare": false, 00:11:36.997 "compare_and_write": false, 00:11:36.997 "abort": true, 00:11:36.997 "seek_hole": false, 00:11:36.997 "seek_data": false, 00:11:36.997 "copy": true, 00:11:37.254 "nvme_iov_md": false 00:11:37.254 }, 00:11:37.254 "memory_domains": [ 00:11:37.254 { 00:11:37.254 "dma_device_id": "system", 00:11:37.254 "dma_device_type": 1 00:11:37.254 }, 00:11:37.254 { 00:11:37.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.254 "dma_device_type": 2 00:11:37.254 } 00:11:37.254 ], 00:11:37.254 "driver_specific": {} 00:11:37.254 } 00:11:37.254 ] 00:11:37.254 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.254 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.254 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:37.254 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.255 14:28:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.255 "name": "Existed_Raid", 00:11:37.255 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:37.255 "strip_size_kb": 64, 00:11:37.255 "state": "configuring", 00:11:37.255 "raid_level": "raid0", 00:11:37.255 "superblock": true, 00:11:37.255 "num_base_bdevs": 4, 00:11:37.255 "num_base_bdevs_discovered": 3, 00:11:37.255 "num_base_bdevs_operational": 4, 00:11:37.255 "base_bdevs_list": [ 00:11:37.255 { 00:11:37.255 "name": "BaseBdev1", 00:11:37.255 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:37.255 "is_configured": true, 00:11:37.255 "data_offset": 2048, 00:11:37.255 "data_size": 63488 00:11:37.255 }, 00:11:37.255 { 
00:11:37.255 "name": null, 00:11:37.255 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:37.255 "is_configured": false, 00:11:37.255 "data_offset": 0, 00:11:37.255 "data_size": 63488 00:11:37.255 }, 00:11:37.255 { 00:11:37.255 "name": "BaseBdev3", 00:11:37.255 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:37.255 "is_configured": true, 00:11:37.255 "data_offset": 2048, 00:11:37.255 "data_size": 63488 00:11:37.255 }, 00:11:37.255 { 00:11:37.255 "name": "BaseBdev4", 00:11:37.255 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:37.255 "is_configured": true, 00:11:37.255 "data_offset": 2048, 00:11:37.255 "data_size": 63488 00:11:37.255 } 00:11:37.255 ] 00:11:37.255 }' 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.255 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.512 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:37.512 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.512 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.512 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 [2024-11-20 14:28:38.610065] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.771 14:28:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.771 "name": "Existed_Raid", 00:11:37.771 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:37.771 "strip_size_kb": 64, 00:11:37.771 "state": "configuring", 00:11:37.771 "raid_level": "raid0", 00:11:37.771 "superblock": true, 00:11:37.771 "num_base_bdevs": 4, 00:11:37.771 "num_base_bdevs_discovered": 2, 00:11:37.771 "num_base_bdevs_operational": 4, 00:11:37.771 "base_bdevs_list": [ 00:11:37.771 { 00:11:37.771 "name": "BaseBdev1", 00:11:37.771 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:37.771 "is_configured": true, 00:11:37.771 "data_offset": 2048, 00:11:37.771 "data_size": 63488 00:11:37.771 }, 00:11:37.771 { 00:11:37.771 "name": null, 00:11:37.771 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:37.771 "is_configured": false, 00:11:37.771 "data_offset": 0, 00:11:37.771 "data_size": 63488 00:11:37.771 }, 00:11:37.771 { 00:11:37.771 "name": null, 00:11:37.771 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:37.771 "is_configured": false, 00:11:37.771 "data_offset": 0, 00:11:37.771 "data_size": 63488 00:11:37.771 }, 00:11:37.771 { 00:11:37.771 "name": "BaseBdev4", 00:11:37.771 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:37.771 "is_configured": true, 00:11:37.771 "data_offset": 2048, 00:11:37.771 "data_size": 63488 00:11:37.771 } 00:11:37.771 ] 00:11:37.771 }' 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.771 14:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.347 
14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.347 [2024-11-20 14:28:39.142153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.347 "name": "Existed_Raid", 00:11:38.347 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:38.347 "strip_size_kb": 64, 00:11:38.347 "state": "configuring", 00:11:38.347 "raid_level": "raid0", 00:11:38.347 "superblock": true, 00:11:38.347 "num_base_bdevs": 4, 00:11:38.347 "num_base_bdevs_discovered": 3, 00:11:38.347 "num_base_bdevs_operational": 4, 00:11:38.347 "base_bdevs_list": [ 00:11:38.347 { 00:11:38.347 "name": "BaseBdev1", 00:11:38.347 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:38.347 "is_configured": true, 00:11:38.347 "data_offset": 2048, 00:11:38.347 "data_size": 63488 00:11:38.347 }, 00:11:38.347 { 00:11:38.347 "name": null, 00:11:38.347 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:38.347 "is_configured": false, 00:11:38.347 "data_offset": 0, 00:11:38.347 "data_size": 63488 00:11:38.347 }, 00:11:38.347 { 00:11:38.347 "name": "BaseBdev3", 00:11:38.347 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:38.347 "is_configured": true, 00:11:38.347 "data_offset": 2048, 00:11:38.347 "data_size": 63488 00:11:38.347 }, 00:11:38.347 { 00:11:38.347 "name": "BaseBdev4", 00:11:38.347 "uuid": 
"e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:38.347 "is_configured": true, 00:11:38.347 "data_offset": 2048, 00:11:38.347 "data_size": 63488 00:11:38.347 } 00:11:38.347 ] 00:11:38.347 }' 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.347 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.604 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.604 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.604 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.604 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:38.604 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.861 [2024-11-20 14:28:39.670357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.861 "name": "Existed_Raid", 00:11:38.861 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:38.861 "strip_size_kb": 64, 00:11:38.861 "state": "configuring", 00:11:38.861 "raid_level": "raid0", 00:11:38.861 "superblock": true, 00:11:38.861 "num_base_bdevs": 4, 00:11:38.861 "num_base_bdevs_discovered": 2, 00:11:38.861 "num_base_bdevs_operational": 4, 00:11:38.861 "base_bdevs_list": [ 00:11:38.861 { 00:11:38.861 "name": null, 00:11:38.861 
"uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:38.861 "is_configured": false, 00:11:38.861 "data_offset": 0, 00:11:38.861 "data_size": 63488 00:11:38.861 }, 00:11:38.861 { 00:11:38.861 "name": null, 00:11:38.861 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:38.861 "is_configured": false, 00:11:38.861 "data_offset": 0, 00:11:38.861 "data_size": 63488 00:11:38.861 }, 00:11:38.861 { 00:11:38.861 "name": "BaseBdev3", 00:11:38.861 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:38.861 "is_configured": true, 00:11:38.861 "data_offset": 2048, 00:11:38.861 "data_size": 63488 00:11:38.861 }, 00:11:38.861 { 00:11:38.861 "name": "BaseBdev4", 00:11:38.861 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:38.861 "is_configured": true, 00:11:38.861 "data_offset": 2048, 00:11:38.861 "data_size": 63488 00:11:38.861 } 00:11:38.861 ] 00:11:38.861 }' 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.861 14:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.426 [2024-11-20 14:28:40.342837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.426 14:28:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.426 "name": "Existed_Raid", 00:11:39.426 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:39.426 "strip_size_kb": 64, 00:11:39.426 "state": "configuring", 00:11:39.426 "raid_level": "raid0", 00:11:39.426 "superblock": true, 00:11:39.426 "num_base_bdevs": 4, 00:11:39.426 "num_base_bdevs_discovered": 3, 00:11:39.426 "num_base_bdevs_operational": 4, 00:11:39.426 "base_bdevs_list": [ 00:11:39.426 { 00:11:39.426 "name": null, 00:11:39.426 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:39.426 "is_configured": false, 00:11:39.426 "data_offset": 0, 00:11:39.426 "data_size": 63488 00:11:39.426 }, 00:11:39.426 { 00:11:39.426 "name": "BaseBdev2", 00:11:39.426 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:39.426 "is_configured": true, 00:11:39.426 "data_offset": 2048, 00:11:39.426 "data_size": 63488 00:11:39.426 }, 00:11:39.426 { 00:11:39.426 "name": "BaseBdev3", 00:11:39.426 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:39.426 "is_configured": true, 00:11:39.426 "data_offset": 2048, 00:11:39.426 "data_size": 63488 00:11:39.426 }, 00:11:39.426 { 00:11:39.426 "name": "BaseBdev4", 00:11:39.426 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:39.426 "is_configured": true, 00:11:39.426 "data_offset": 2048, 00:11:39.426 "data_size": 63488 00:11:39.426 } 00:11:39.426 ] 00:11:39.426 }' 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.426 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.992 14:28:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0814a701-2025-48b7-adb3-7eaab281a7a2 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.992 [2024-11-20 14:28:40.974734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:39.992 [2024-11-20 14:28:40.975324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:39.992 [2024-11-20 14:28:40.975359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:39.992 NewBaseBdev 00:11:39.992 [2024-11-20 14:28:40.975847] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.992 [2024-11-20 14:28:40.976138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:39.992 [2024-11-20 14:28:40.976174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:39.992 [2024-11-20 14:28:40.976444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.992 
14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.992 [ 00:11:39.992 { 00:11:39.992 "name": "NewBaseBdev", 00:11:39.992 "aliases": [ 00:11:39.992 "0814a701-2025-48b7-adb3-7eaab281a7a2" 00:11:39.992 ], 00:11:39.992 "product_name": "Malloc disk", 00:11:39.992 "block_size": 512, 00:11:39.992 "num_blocks": 65536, 00:11:39.992 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:39.992 "assigned_rate_limits": { 00:11:39.992 "rw_ios_per_sec": 0, 00:11:39.992 "rw_mbytes_per_sec": 0, 00:11:39.992 "r_mbytes_per_sec": 0, 00:11:39.992 "w_mbytes_per_sec": 0 00:11:39.992 }, 00:11:39.992 "claimed": true, 00:11:39.992 "claim_type": "exclusive_write", 00:11:39.992 "zoned": false, 00:11:39.992 "supported_io_types": { 00:11:39.992 "read": true, 00:11:39.992 "write": true, 00:11:39.992 "unmap": true, 00:11:39.992 "flush": true, 00:11:39.992 "reset": true, 00:11:39.992 "nvme_admin": false, 00:11:39.992 "nvme_io": false, 00:11:39.992 "nvme_io_md": false, 00:11:39.992 "write_zeroes": true, 00:11:39.992 "zcopy": true, 00:11:39.992 "get_zone_info": false, 00:11:39.992 "zone_management": false, 00:11:39.992 "zone_append": false, 00:11:39.992 "compare": false, 00:11:39.992 "compare_and_write": false, 00:11:39.992 "abort": true, 00:11:39.992 "seek_hole": false, 00:11:39.992 "seek_data": false, 00:11:39.992 "copy": true, 00:11:39.992 "nvme_iov_md": false 00:11:39.992 }, 00:11:39.992 "memory_domains": [ 00:11:39.992 { 00:11:39.992 "dma_device_id": "system", 00:11:39.992 "dma_device_type": 1 00:11:39.992 }, 00:11:39.992 { 00:11:39.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.992 "dma_device_type": 2 00:11:39.992 } 00:11:39.992 ], 00:11:39.992 "driver_specific": {} 00:11:39.992 } 00:11:39.992 ] 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.992 14:28:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.992 14:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.992 "name": "Existed_Raid", 00:11:39.992 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:39.992 "strip_size_kb": 64, 00:11:39.992 
"state": "online", 00:11:39.992 "raid_level": "raid0", 00:11:39.992 "superblock": true, 00:11:39.992 "num_base_bdevs": 4, 00:11:39.992 "num_base_bdevs_discovered": 4, 00:11:39.992 "num_base_bdevs_operational": 4, 00:11:39.992 "base_bdevs_list": [ 00:11:39.992 { 00:11:39.992 "name": "NewBaseBdev", 00:11:39.992 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:39.992 "is_configured": true, 00:11:39.992 "data_offset": 2048, 00:11:39.992 "data_size": 63488 00:11:39.992 }, 00:11:39.992 { 00:11:39.992 "name": "BaseBdev2", 00:11:39.992 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:39.992 "is_configured": true, 00:11:39.992 "data_offset": 2048, 00:11:39.992 "data_size": 63488 00:11:39.992 }, 00:11:39.992 { 00:11:39.992 "name": "BaseBdev3", 00:11:39.992 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:39.992 "is_configured": true, 00:11:39.992 "data_offset": 2048, 00:11:39.992 "data_size": 63488 00:11:39.992 }, 00:11:39.992 { 00:11:39.992 "name": "BaseBdev4", 00:11:39.992 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:39.992 "is_configured": true, 00:11:39.992 "data_offset": 2048, 00:11:39.992 "data_size": 63488 00:11:39.992 } 00:11:39.992 ] 00:11:39.992 }' 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.992 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.565 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.566 
14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.566 [2024-11-20 14:28:41.503393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.566 "name": "Existed_Raid", 00:11:40.566 "aliases": [ 00:11:40.566 "f3918b73-3483-4164-a042-d3b2f737f00a" 00:11:40.566 ], 00:11:40.566 "product_name": "Raid Volume", 00:11:40.566 "block_size": 512, 00:11:40.566 "num_blocks": 253952, 00:11:40.566 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:40.566 "assigned_rate_limits": { 00:11:40.566 "rw_ios_per_sec": 0, 00:11:40.566 "rw_mbytes_per_sec": 0, 00:11:40.566 "r_mbytes_per_sec": 0, 00:11:40.566 "w_mbytes_per_sec": 0 00:11:40.566 }, 00:11:40.566 "claimed": false, 00:11:40.566 "zoned": false, 00:11:40.566 "supported_io_types": { 00:11:40.566 "read": true, 00:11:40.566 "write": true, 00:11:40.566 "unmap": true, 00:11:40.566 "flush": true, 00:11:40.566 "reset": true, 00:11:40.566 "nvme_admin": false, 00:11:40.566 "nvme_io": false, 00:11:40.566 "nvme_io_md": false, 00:11:40.566 "write_zeroes": true, 00:11:40.566 "zcopy": false, 00:11:40.566 "get_zone_info": false, 00:11:40.566 "zone_management": false, 00:11:40.566 "zone_append": false, 00:11:40.566 "compare": false, 00:11:40.566 "compare_and_write": false, 00:11:40.566 "abort": 
false, 00:11:40.566 "seek_hole": false, 00:11:40.566 "seek_data": false, 00:11:40.566 "copy": false, 00:11:40.566 "nvme_iov_md": false 00:11:40.566 }, 00:11:40.566 "memory_domains": [ 00:11:40.566 { 00:11:40.566 "dma_device_id": "system", 00:11:40.566 "dma_device_type": 1 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.566 "dma_device_type": 2 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "dma_device_id": "system", 00:11:40.566 "dma_device_type": 1 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.566 "dma_device_type": 2 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "dma_device_id": "system", 00:11:40.566 "dma_device_type": 1 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.566 "dma_device_type": 2 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "dma_device_id": "system", 00:11:40.566 "dma_device_type": 1 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.566 "dma_device_type": 2 00:11:40.566 } 00:11:40.566 ], 00:11:40.566 "driver_specific": { 00:11:40.566 "raid": { 00:11:40.566 "uuid": "f3918b73-3483-4164-a042-d3b2f737f00a", 00:11:40.566 "strip_size_kb": 64, 00:11:40.566 "state": "online", 00:11:40.566 "raid_level": "raid0", 00:11:40.566 "superblock": true, 00:11:40.566 "num_base_bdevs": 4, 00:11:40.566 "num_base_bdevs_discovered": 4, 00:11:40.566 "num_base_bdevs_operational": 4, 00:11:40.566 "base_bdevs_list": [ 00:11:40.566 { 00:11:40.566 "name": "NewBaseBdev", 00:11:40.566 "uuid": "0814a701-2025-48b7-adb3-7eaab281a7a2", 00:11:40.566 "is_configured": true, 00:11:40.566 "data_offset": 2048, 00:11:40.566 "data_size": 63488 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "name": "BaseBdev2", 00:11:40.566 "uuid": "2b96419b-c7c0-4de7-8991-d4d9d2c73453", 00:11:40.566 "is_configured": true, 00:11:40.566 "data_offset": 2048, 00:11:40.566 "data_size": 63488 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 
"name": "BaseBdev3", 00:11:40.566 "uuid": "6d26f390-9602-4b30-a885-6928cd57478c", 00:11:40.566 "is_configured": true, 00:11:40.566 "data_offset": 2048, 00:11:40.566 "data_size": 63488 00:11:40.566 }, 00:11:40.566 { 00:11:40.566 "name": "BaseBdev4", 00:11:40.566 "uuid": "e2b44dbc-5684-4fd4-93d3-0ab11f7ab6fd", 00:11:40.566 "is_configured": true, 00:11:40.566 "data_offset": 2048, 00:11:40.566 "data_size": 63488 00:11:40.566 } 00:11:40.566 ] 00:11:40.566 } 00:11:40.566 } 00:11:40.566 }' 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:40.566 BaseBdev2 00:11:40.566 BaseBdev3 00:11:40.566 BaseBdev4' 00:11:40.566 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.823 14:28:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.823 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.079 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.079 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.079 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.079 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.079 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.079 [2024-11-20 14:28:41.883035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.079 [2024-11-20 14:28:41.883236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.079 [2024-11-20 14:28:41.883558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.079 [2024-11-20 14:28:41.883745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.079 [2024-11-20 14:28:41.883770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:41.079 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.079 14:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70231 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70231 ']' 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70231 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70231 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70231' 00:11:41.080 killing process with pid 70231 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70231 00:11:41.080 [2024-11-20 14:28:41.929488] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.080 14:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70231 00:11:41.335 [2024-11-20 14:28:42.286138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.706 ************************************ 00:11:42.706 END TEST raid_state_function_test_sb 00:11:42.706 ************************************ 00:11:42.706 14:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:42.706 00:11:42.706 real 0m12.949s 00:11:42.706 user 0m21.416s 00:11:42.706 sys 
0m1.783s 00:11:42.706 14:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.706 14:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.706 14:28:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:42.706 14:28:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.706 14:28:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.706 14:28:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.706 ************************************ 00:11:42.706 START TEST raid_superblock_test 00:11:42.706 ************************************ 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:42.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70917 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70917 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70917 ']' 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.706 14:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.706 [2024-11-20 14:28:43.503563] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:11:42.706 [2024-11-20 14:28:43.503967] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70917 ] 00:11:42.965 [2024-11-20 14:28:43.764832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.965 [2024-11-20 14:28:43.895459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.223 [2024-11-20 14:28:44.098891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.223 [2024-11-20 14:28:44.098952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.481 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.481 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:43.481 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:43.481 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.481 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:43.482 
14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.482 malloc1 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.482 [2024-11-20 14:28:44.508030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:43.482 [2024-11-20 14:28:44.508244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.482 [2024-11-20 14:28:44.508322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:43.482 [2024-11-20 14:28:44.508493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.482 [2024-11-20 14:28:44.511396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.482 [2024-11-20 14:28:44.511566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:43.482 pt1 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.482 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.740 malloc2 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.740 [2024-11-20 14:28:44.564390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:43.740 [2024-11-20 14:28:44.564463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.740 [2024-11-20 14:28:44.564501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:43.740 [2024-11-20 14:28:44.564517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.740 [2024-11-20 14:28:44.567361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.740 [2024-11-20 14:28:44.567407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:43.740 
pt2 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.740 malloc3 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.740 [2024-11-20 14:28:44.629266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:43.740 [2024-11-20 14:28:44.629335] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.740 [2024-11-20 14:28:44.629369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:43.740 [2024-11-20 14:28:44.629385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.740 [2024-11-20 14:28:44.632174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.740 [2024-11-20 14:28:44.632220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:43.740 pt3 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:43.740 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.741 malloc4 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.741 [2024-11-20 14:28:44.685879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:43.741 [2024-11-20 14:28:44.685954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.741 [2024-11-20 14:28:44.685985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:43.741 [2024-11-20 14:28:44.686001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.741 [2024-11-20 14:28:44.688793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.741 [2024-11-20 14:28:44.688837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:43.741 pt4 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.741 [2024-11-20 14:28:44.693909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:43.741 [2024-11-20 
14:28:44.696321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:43.741 [2024-11-20 14:28:44.696447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:43.741 [2024-11-20 14:28:44.696524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:43.741 [2024-11-20 14:28:44.696816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:43.741 [2024-11-20 14:28:44.696835] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:43.741 [2024-11-20 14:28:44.697153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:43.741 [2024-11-20 14:28:44.697371] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:43.741 [2024-11-20 14:28:44.697393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:43.741 [2024-11-20 14:28:44.697574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.741 "name": "raid_bdev1", 00:11:43.741 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:43.741 "strip_size_kb": 64, 00:11:43.741 "state": "online", 00:11:43.741 "raid_level": "raid0", 00:11:43.741 "superblock": true, 00:11:43.741 "num_base_bdevs": 4, 00:11:43.741 "num_base_bdevs_discovered": 4, 00:11:43.741 "num_base_bdevs_operational": 4, 00:11:43.741 "base_bdevs_list": [ 00:11:43.741 { 00:11:43.741 "name": "pt1", 00:11:43.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.741 "is_configured": true, 00:11:43.741 "data_offset": 2048, 00:11:43.741 "data_size": 63488 00:11:43.741 }, 00:11:43.741 { 00:11:43.741 "name": "pt2", 00:11:43.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.741 "is_configured": true, 00:11:43.741 "data_offset": 2048, 00:11:43.741 "data_size": 63488 00:11:43.741 }, 00:11:43.741 { 00:11:43.741 "name": "pt3", 00:11:43.741 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.741 "is_configured": true, 00:11:43.741 "data_offset": 2048, 00:11:43.741 
"data_size": 63488 00:11:43.741 }, 00:11:43.741 { 00:11:43.741 "name": "pt4", 00:11:43.741 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:43.741 "is_configured": true, 00:11:43.741 "data_offset": 2048, 00:11:43.741 "data_size": 63488 00:11:43.741 } 00:11:43.741 ] 00:11:43.741 }' 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.741 14:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.307 [2024-11-20 14:28:45.218476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.307 "name": "raid_bdev1", 00:11:44.307 "aliases": [ 00:11:44.307 "98ef2736-31da-4bef-a570-d439dbdd3d90" 
00:11:44.307 ], 00:11:44.307 "product_name": "Raid Volume", 00:11:44.307 "block_size": 512, 00:11:44.307 "num_blocks": 253952, 00:11:44.307 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:44.307 "assigned_rate_limits": { 00:11:44.307 "rw_ios_per_sec": 0, 00:11:44.307 "rw_mbytes_per_sec": 0, 00:11:44.307 "r_mbytes_per_sec": 0, 00:11:44.307 "w_mbytes_per_sec": 0 00:11:44.307 }, 00:11:44.307 "claimed": false, 00:11:44.307 "zoned": false, 00:11:44.307 "supported_io_types": { 00:11:44.307 "read": true, 00:11:44.307 "write": true, 00:11:44.307 "unmap": true, 00:11:44.307 "flush": true, 00:11:44.307 "reset": true, 00:11:44.307 "nvme_admin": false, 00:11:44.307 "nvme_io": false, 00:11:44.307 "nvme_io_md": false, 00:11:44.307 "write_zeroes": true, 00:11:44.307 "zcopy": false, 00:11:44.307 "get_zone_info": false, 00:11:44.307 "zone_management": false, 00:11:44.307 "zone_append": false, 00:11:44.307 "compare": false, 00:11:44.307 "compare_and_write": false, 00:11:44.307 "abort": false, 00:11:44.307 "seek_hole": false, 00:11:44.307 "seek_data": false, 00:11:44.307 "copy": false, 00:11:44.307 "nvme_iov_md": false 00:11:44.307 }, 00:11:44.307 "memory_domains": [ 00:11:44.307 { 00:11:44.307 "dma_device_id": "system", 00:11:44.307 "dma_device_type": 1 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.307 "dma_device_type": 2 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "dma_device_id": "system", 00:11:44.307 "dma_device_type": 1 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.307 "dma_device_type": 2 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "dma_device_id": "system", 00:11:44.307 "dma_device_type": 1 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.307 "dma_device_type": 2 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "dma_device_id": "system", 00:11:44.307 "dma_device_type": 1 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:44.307 "dma_device_type": 2 00:11:44.307 } 00:11:44.307 ], 00:11:44.307 "driver_specific": { 00:11:44.307 "raid": { 00:11:44.307 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:44.307 "strip_size_kb": 64, 00:11:44.307 "state": "online", 00:11:44.307 "raid_level": "raid0", 00:11:44.307 "superblock": true, 00:11:44.307 "num_base_bdevs": 4, 00:11:44.307 "num_base_bdevs_discovered": 4, 00:11:44.307 "num_base_bdevs_operational": 4, 00:11:44.307 "base_bdevs_list": [ 00:11:44.307 { 00:11:44.307 "name": "pt1", 00:11:44.307 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:44.307 "is_configured": true, 00:11:44.307 "data_offset": 2048, 00:11:44.307 "data_size": 63488 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "name": "pt2", 00:11:44.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.307 "is_configured": true, 00:11:44.307 "data_offset": 2048, 00:11:44.307 "data_size": 63488 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "name": "pt3", 00:11:44.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.307 "is_configured": true, 00:11:44.307 "data_offset": 2048, 00:11:44.307 "data_size": 63488 00:11:44.307 }, 00:11:44.307 { 00:11:44.307 "name": "pt4", 00:11:44.307 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.307 "is_configured": true, 00:11:44.307 "data_offset": 2048, 00:11:44.307 "data_size": 63488 00:11:44.307 } 00:11:44.307 ] 00:11:44.307 } 00:11:44.307 } 00:11:44.307 }' 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:44.307 pt2 00:11:44.307 pt3 00:11:44.307 pt4' 00:11:44.307 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.565 14:28:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.566 14:28:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.566 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 [2024-11-20 14:28:45.634567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=98ef2736-31da-4bef-a570-d439dbdd3d90 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 98ef2736-31da-4bef-a570-d439dbdd3d90 ']' 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 [2024-11-20 14:28:45.674190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.824 [2024-11-20 14:28:45.674231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.824 [2024-11-20 14:28:45.674351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.824 [2024-11-20 14:28:45.674449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.824 [2024-11-20 14:28:45.674473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:44.824 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.825 14:28:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.825 [2024-11-20 14:28:45.814263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:44.825 [2024-11-20 14:28:45.816888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:44.825 [2024-11-20 14:28:45.817112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:44.825 [2024-11-20 14:28:45.817208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:44.825 [2024-11-20 14:28:45.817291] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:44.825 [2024-11-20 14:28:45.817380] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:44.825 [2024-11-20 14:28:45.817415] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:44.825 [2024-11-20 14:28:45.817448] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:44.825 [2024-11-20 14:28:45.817470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.825 [2024-11-20 14:28:45.817489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:44.825 request: 00:11:44.825 { 00:11:44.825 "name": "raid_bdev1", 00:11:44.825 "raid_level": "raid0", 00:11:44.825 "base_bdevs": [ 00:11:44.825 "malloc1", 00:11:44.825 "malloc2", 00:11:44.825 "malloc3", 00:11:44.825 "malloc4" 00:11:44.825 ], 00:11:44.825 "strip_size_kb": 64, 00:11:44.825 "superblock": false, 00:11:44.825 "method": "bdev_raid_create", 00:11:44.825 "req_id": 1 00:11:44.825 } 00:11:44.825 Got JSON-RPC error response 00:11:44.825 response: 00:11:44.825 { 00:11:44.825 "code": -17, 00:11:44.825 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:44.825 } 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.825 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.825 [2024-11-20 14:28:45.878325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.825 [2024-11-20 14:28:45.878569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.825 [2024-11-20 14:28:45.878765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:44.825 [2024-11-20 14:28:45.878956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.084 [2024-11-20 14:28:45.882125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.084 [2024-11-20 14:28:45.882303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.084 [2024-11-20 14:28:45.882602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:45.084 [2024-11-20 14:28:45.882855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.084 pt1 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.084 "name": "raid_bdev1", 00:11:45.084 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:45.084 "strip_size_kb": 64, 00:11:45.084 "state": "configuring", 00:11:45.084 "raid_level": "raid0", 00:11:45.084 "superblock": true, 00:11:45.084 "num_base_bdevs": 4, 00:11:45.084 "num_base_bdevs_discovered": 1, 00:11:45.084 "num_base_bdevs_operational": 4, 00:11:45.084 "base_bdevs_list": [ 00:11:45.084 { 00:11:45.084 "name": "pt1", 00:11:45.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.084 "is_configured": true, 00:11:45.084 "data_offset": 2048, 00:11:45.084 "data_size": 63488 00:11:45.084 }, 00:11:45.084 { 00:11:45.084 "name": null, 00:11:45.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.084 "is_configured": false, 00:11:45.084 "data_offset": 2048, 00:11:45.084 "data_size": 63488 00:11:45.084 }, 00:11:45.084 { 00:11:45.084 "name": null, 00:11:45.084 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.084 "is_configured": false, 00:11:45.084 "data_offset": 2048, 00:11:45.084 "data_size": 63488 00:11:45.084 }, 00:11:45.084 { 00:11:45.084 "name": null, 00:11:45.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.084 "is_configured": false, 00:11:45.084 "data_offset": 2048, 00:11:45.084 "data_size": 63488 00:11:45.084 } 00:11:45.084 ] 00:11:45.084 }' 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.084 14:28:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.342 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:45.342 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.343 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.343 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.600 [2024-11-20 14:28:46.398859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.600 [2024-11-20 14:28:46.398984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.601 [2024-11-20 14:28:46.399017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:45.601 [2024-11-20 14:28:46.399037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.601 [2024-11-20 14:28:46.399657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.601 [2024-11-20 14:28:46.399695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.601 [2024-11-20 14:28:46.399847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:45.601 [2024-11-20 14:28:46.400015] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.601 pt2 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.601 [2024-11-20 14:28:46.406857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.601 14:28:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.601 "name": "raid_bdev1", 00:11:45.601 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:45.601 "strip_size_kb": 64, 00:11:45.601 "state": "configuring", 00:11:45.601 "raid_level": "raid0", 00:11:45.601 "superblock": true, 00:11:45.601 "num_base_bdevs": 4, 00:11:45.601 "num_base_bdevs_discovered": 1, 00:11:45.601 "num_base_bdevs_operational": 4, 00:11:45.601 "base_bdevs_list": [ 00:11:45.601 { 00:11:45.601 "name": "pt1", 00:11:45.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.601 "is_configured": true, 00:11:45.601 "data_offset": 2048, 00:11:45.601 "data_size": 63488 00:11:45.601 }, 00:11:45.601 { 00:11:45.601 "name": null, 00:11:45.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.601 "is_configured": false, 00:11:45.601 "data_offset": 0, 00:11:45.601 "data_size": 63488 00:11:45.601 }, 00:11:45.601 { 00:11:45.601 "name": null, 00:11:45.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.601 "is_configured": false, 00:11:45.601 "data_offset": 2048, 00:11:45.601 "data_size": 63488 00:11:45.601 }, 00:11:45.601 { 00:11:45.601 "name": null, 00:11:45.601 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.601 "is_configured": false, 00:11:45.601 "data_offset": 2048, 00:11:45.601 "data_size": 63488 00:11:45.601 } 00:11:45.601 ] 00:11:45.601 }' 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.601 14:28:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.166 [2024-11-20 14:28:46.975154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:46.166 [2024-11-20 14:28:46.975283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.166 [2024-11-20 14:28:46.975328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:46.166 [2024-11-20 14:28:46.975345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.166 [2024-11-20 14:28:46.976123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.166 [2024-11-20 14:28:46.976169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:46.166 [2024-11-20 14:28:46.976330] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:46.166 [2024-11-20 14:28:46.976375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:46.166 pt2 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.166 [2024-11-20 14:28:46.987123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:46.166 [2024-11-20 14:28:46.987424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.166 [2024-11-20 14:28:46.987602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:46.166 [2024-11-20 14:28:46.987777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.166 [2024-11-20 14:28:46.988706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.166 [2024-11-20 14:28:46.988887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:46.166 [2024-11-20 14:28:46.989184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:46.166 [2024-11-20 14:28:46.989357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:46.166 pt3 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.166 14:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.166 [2024-11-20 14:28:46.999074] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:46.166 [2024-11-20 14:28:46.999366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.166 [2024-11-20 14:28:46.999574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:46.166 [2024-11-20 14:28:46.999605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.166 [2024-11-20 14:28:47.000423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.166 [2024-11-20 14:28:47.000470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:46.166 [2024-11-20 14:28:47.000615] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:46.166 [2024-11-20 14:28:47.000686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:46.166 [2024-11-20 14:28:47.000930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.166 [2024-11-20 14:28:47.000963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:46.166 [2024-11-20 14:28:47.001320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:46.166 [2024-11-20 14:28:47.001577] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.166 [2024-11-20 14:28:47.001610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:46.166 [2024-11-20 14:28:47.001863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.166 pt4 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.166 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.166 "name": "raid_bdev1", 00:11:46.166 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:46.167 "strip_size_kb": 64, 00:11:46.167 "state": "online", 00:11:46.167 "raid_level": "raid0", 00:11:46.167 
"superblock": true, 00:11:46.167 "num_base_bdevs": 4, 00:11:46.167 "num_base_bdevs_discovered": 4, 00:11:46.167 "num_base_bdevs_operational": 4, 00:11:46.167 "base_bdevs_list": [ 00:11:46.167 { 00:11:46.167 "name": "pt1", 00:11:46.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.167 "is_configured": true, 00:11:46.167 "data_offset": 2048, 00:11:46.167 "data_size": 63488 00:11:46.167 }, 00:11:46.167 { 00:11:46.167 "name": "pt2", 00:11:46.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.167 "is_configured": true, 00:11:46.167 "data_offset": 2048, 00:11:46.167 "data_size": 63488 00:11:46.167 }, 00:11:46.167 { 00:11:46.167 "name": "pt3", 00:11:46.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.167 "is_configured": true, 00:11:46.167 "data_offset": 2048, 00:11:46.167 "data_size": 63488 00:11:46.167 }, 00:11:46.167 { 00:11:46.167 "name": "pt4", 00:11:46.167 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.167 "is_configured": true, 00:11:46.167 "data_offset": 2048, 00:11:46.167 "data_size": 63488 00:11:46.167 } 00:11:46.167 ] 00:11:46.167 }' 00:11:46.167 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.167 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.733 14:28:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.733 [2024-11-20 14:28:47.563664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.733 "name": "raid_bdev1", 00:11:46.733 "aliases": [ 00:11:46.733 "98ef2736-31da-4bef-a570-d439dbdd3d90" 00:11:46.733 ], 00:11:46.733 "product_name": "Raid Volume", 00:11:46.733 "block_size": 512, 00:11:46.733 "num_blocks": 253952, 00:11:46.733 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:46.733 "assigned_rate_limits": { 00:11:46.733 "rw_ios_per_sec": 0, 00:11:46.733 "rw_mbytes_per_sec": 0, 00:11:46.733 "r_mbytes_per_sec": 0, 00:11:46.733 "w_mbytes_per_sec": 0 00:11:46.733 }, 00:11:46.733 "claimed": false, 00:11:46.733 "zoned": false, 00:11:46.733 "supported_io_types": { 00:11:46.733 "read": true, 00:11:46.733 "write": true, 00:11:46.733 "unmap": true, 00:11:46.733 "flush": true, 00:11:46.733 "reset": true, 00:11:46.733 "nvme_admin": false, 00:11:46.733 "nvme_io": false, 00:11:46.733 "nvme_io_md": false, 00:11:46.733 "write_zeroes": true, 00:11:46.733 "zcopy": false, 00:11:46.733 "get_zone_info": false, 00:11:46.733 "zone_management": false, 00:11:46.733 "zone_append": false, 00:11:46.733 "compare": false, 00:11:46.733 "compare_and_write": false, 00:11:46.733 "abort": false, 00:11:46.733 "seek_hole": false, 00:11:46.733 "seek_data": false, 00:11:46.733 "copy": false, 00:11:46.733 "nvme_iov_md": false 00:11:46.733 }, 00:11:46.733 
"memory_domains": [ 00:11:46.733 { 00:11:46.733 "dma_device_id": "system", 00:11:46.733 "dma_device_type": 1 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.733 "dma_device_type": 2 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "dma_device_id": "system", 00:11:46.733 "dma_device_type": 1 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.733 "dma_device_type": 2 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "dma_device_id": "system", 00:11:46.733 "dma_device_type": 1 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.733 "dma_device_type": 2 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "dma_device_id": "system", 00:11:46.733 "dma_device_type": 1 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.733 "dma_device_type": 2 00:11:46.733 } 00:11:46.733 ], 00:11:46.733 "driver_specific": { 00:11:46.733 "raid": { 00:11:46.733 "uuid": "98ef2736-31da-4bef-a570-d439dbdd3d90", 00:11:46.733 "strip_size_kb": 64, 00:11:46.733 "state": "online", 00:11:46.733 "raid_level": "raid0", 00:11:46.733 "superblock": true, 00:11:46.733 "num_base_bdevs": 4, 00:11:46.733 "num_base_bdevs_discovered": 4, 00:11:46.733 "num_base_bdevs_operational": 4, 00:11:46.733 "base_bdevs_list": [ 00:11:46.733 { 00:11:46.733 "name": "pt1", 00:11:46.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.733 "is_configured": true, 00:11:46.733 "data_offset": 2048, 00:11:46.733 "data_size": 63488 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "name": "pt2", 00:11:46.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.733 "is_configured": true, 00:11:46.733 "data_offset": 2048, 00:11:46.733 "data_size": 63488 00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "name": "pt3", 00:11:46.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.733 "is_configured": true, 00:11:46.733 "data_offset": 2048, 00:11:46.733 "data_size": 63488 
00:11:46.733 }, 00:11:46.733 { 00:11:46.733 "name": "pt4", 00:11:46.733 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:46.733 "is_configured": true, 00:11:46.733 "data_offset": 2048, 00:11:46.733 "data_size": 63488 00:11:46.733 } 00:11:46.733 ] 00:11:46.733 } 00:11:46.733 } 00:11:46.733 }' 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:46.733 pt2 00:11:46.733 pt3 00:11:46.733 pt4' 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.733 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.734 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.992 [2024-11-20 14:28:47.931711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 98ef2736-31da-4bef-a570-d439dbdd3d90 '!=' 98ef2736-31da-4bef-a570-d439dbdd3d90 ']' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70917 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70917 ']' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70917 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.992 14:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70917 00:11:46.992 killing process with pid 70917 00:11:46.992 14:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.992 14:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.992 14:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70917' 00:11:46.992 14:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70917 00:11:46.992 [2024-11-20 14:28:48.012235] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.992 14:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70917 00:11:46.993 [2024-11-20 14:28:48.012749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.993 [2024-11-20 14:28:48.012871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.993 [2024-11-20 14:28:48.012890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:47.588 [2024-11-20 14:28:48.386766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:48.520 14:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:48.520 00:11:48.520 real 0m6.050s 00:11:48.520 user 0m9.050s 00:11:48.520 sys 0m0.904s 00:11:48.520 14:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.520 ************************************ 00:11:48.520 END TEST raid_superblock_test 00:11:48.520 ************************************ 00:11:48.520 14:28:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.520 14:28:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:48.520 14:28:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:48.520 14:28:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.520 14:28:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.520 ************************************ 00:11:48.520 START TEST raid_read_error_test 00:11:48.520 ************************************ 00:11:48.520 14:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:48.520 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.er19h8QK9O 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71183 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71183 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71183 ']' 00:11:48.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.521 14:28:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.779 [2024-11-20 14:28:49.629735] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:11:48.779 [2024-11-20 14:28:49.629935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71183 ] 00:11:48.779 [2024-11-20 14:28:49.827570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.037 [2024-11-20 14:28:49.987837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.294 [2024-11-20 14:28:50.218285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.294 [2024-11-20 14:28:50.218334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:49.553 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.553 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:49.553 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.553 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:49.553 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.553 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.811 BaseBdev1_malloc 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.811 true 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.811 [2024-11-20 14:28:50.660920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:49.811 [2024-11-20 14:28:50.660992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.811 [2024-11-20 14:28:50.661023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:49.811 [2024-11-20 14:28:50.661042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.811 [2024-11-20 14:28:50.663937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.811 [2024-11-20 14:28:50.663989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:49.811 BaseBdev1 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.811 BaseBdev2_malloc 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:49.811 14:28:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.812 true 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.812 [2024-11-20 14:28:50.721221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:49.812 [2024-11-20 14:28:50.722271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.812 [2024-11-20 14:28:50.722309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:49.812 [2024-11-20 14:28:50.722329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.812 [2024-11-20 14:28:50.725118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.812 [2024-11-20 14:28:50.725168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:49.812 BaseBdev2 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.812 BaseBdev3_malloc 00:11:49.812 14:28:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.812 true 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.812 [2024-11-20 14:28:50.814672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:49.812 [2024-11-20 14:28:50.814800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.812 [2024-11-20 14:28:50.814853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:49.812 [2024-11-20 14:28:50.814894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.812 [2024-11-20 14:28:50.819405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.812 [2024-11-20 14:28:50.819476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:49.812 BaseBdev3 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.812 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.070 BaseBdev4_malloc 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.070 true 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.070 [2024-11-20 14:28:50.897254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:50.070 [2024-11-20 14:28:50.897340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.070 [2024-11-20 14:28:50.897376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:50.070 [2024-11-20 14:28:50.897399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.070 [2024-11-20 14:28:50.900901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.070 BaseBdev4 00:11:50.070 [2024-11-20 14:28:50.901109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.070 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.070 [2024-11-20 14:28:50.905415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.070 [2024-11-20 14:28:50.908428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.071 [2024-11-20 14:28:50.908731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.071 [2024-11-20 14:28:50.908868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:50.071 [2024-11-20 14:28:50.909266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:50.071 [2024-11-20 14:28:50.909296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:50.071 [2024-11-20 14:28:50.909700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:50.071 [2024-11-20 14:28:50.909969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:50.071 [2024-11-20 14:28:50.909992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:50.071 [2024-11-20 14:28:50.910292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:50.071 14:28:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.071 "name": "raid_bdev1", 00:11:50.071 "uuid": "cc1e39ee-b306-4e34-96fa-c3fb30caa589", 00:11:50.071 "strip_size_kb": 64, 00:11:50.071 "state": "online", 00:11:50.071 "raid_level": "raid0", 00:11:50.071 "superblock": true, 00:11:50.071 "num_base_bdevs": 4, 00:11:50.071 "num_base_bdevs_discovered": 4, 00:11:50.071 "num_base_bdevs_operational": 4, 00:11:50.071 "base_bdevs_list": [ 00:11:50.071 
{ 00:11:50.071 "name": "BaseBdev1", 00:11:50.071 "uuid": "d8b4d26d-acb4-5653-9055-b1638d6d224b", 00:11:50.071 "is_configured": true, 00:11:50.071 "data_offset": 2048, 00:11:50.071 "data_size": 63488 00:11:50.071 }, 00:11:50.071 { 00:11:50.071 "name": "BaseBdev2", 00:11:50.071 "uuid": "53e350fa-e613-5eaf-a41f-f62ff1b0b025", 00:11:50.071 "is_configured": true, 00:11:50.071 "data_offset": 2048, 00:11:50.071 "data_size": 63488 00:11:50.071 }, 00:11:50.071 { 00:11:50.071 "name": "BaseBdev3", 00:11:50.071 "uuid": "9e52cf93-065b-5a7a-ab0c-48efeb850d17", 00:11:50.071 "is_configured": true, 00:11:50.071 "data_offset": 2048, 00:11:50.071 "data_size": 63488 00:11:50.071 }, 00:11:50.071 { 00:11:50.071 "name": "BaseBdev4", 00:11:50.071 "uuid": "1d4b32fc-6669-5715-bf2f-e7707005d15f", 00:11:50.071 "is_configured": true, 00:11:50.071 "data_offset": 2048, 00:11:50.071 "data_size": 63488 00:11:50.071 } 00:11:50.071 ] 00:11:50.071 }' 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.071 14:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.638 14:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:50.638 14:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:50.638 [2024-11-20 14:28:51.635180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.573 14:28:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.573 14:28:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.573 "name": "raid_bdev1", 00:11:51.573 "uuid": "cc1e39ee-b306-4e34-96fa-c3fb30caa589", 00:11:51.573 "strip_size_kb": 64, 00:11:51.573 "state": "online", 00:11:51.573 "raid_level": "raid0", 00:11:51.573 "superblock": true, 00:11:51.573 "num_base_bdevs": 4, 00:11:51.573 "num_base_bdevs_discovered": 4, 00:11:51.573 "num_base_bdevs_operational": 4, 00:11:51.573 "base_bdevs_list": [ 00:11:51.573 { 00:11:51.573 "name": "BaseBdev1", 00:11:51.573 "uuid": "d8b4d26d-acb4-5653-9055-b1638d6d224b", 00:11:51.573 "is_configured": true, 00:11:51.573 "data_offset": 2048, 00:11:51.573 "data_size": 63488 00:11:51.573 }, 00:11:51.573 { 00:11:51.573 "name": "BaseBdev2", 00:11:51.573 "uuid": "53e350fa-e613-5eaf-a41f-f62ff1b0b025", 00:11:51.573 "is_configured": true, 00:11:51.573 "data_offset": 2048, 00:11:51.573 "data_size": 63488 00:11:51.573 }, 00:11:51.573 { 00:11:51.573 "name": "BaseBdev3", 00:11:51.573 "uuid": "9e52cf93-065b-5a7a-ab0c-48efeb850d17", 00:11:51.573 "is_configured": true, 00:11:51.573 "data_offset": 2048, 00:11:51.573 "data_size": 63488 00:11:51.573 }, 00:11:51.573 { 00:11:51.573 "name": "BaseBdev4", 00:11:51.573 "uuid": "1d4b32fc-6669-5715-bf2f-e7707005d15f", 00:11:51.573 "is_configured": true, 00:11:51.573 "data_offset": 2048, 00:11:51.573 "data_size": 63488 00:11:51.573 } 00:11:51.573 ] 00:11:51.573 }' 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.573 14:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.139 [2024-11-20 14:28:53.045366] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.139 [2024-11-20 14:28:53.045428] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.139 [2024-11-20 14:28:53.049773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.139 [2024-11-20 14:28:53.050141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.139 [2024-11-20 14:28:53.050391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.139 { 00:11:52.139 "results": [ 00:11:52.139 { 00:11:52.139 "job": "raid_bdev1", 00:11:52.139 "core_mask": "0x1", 00:11:52.139 "workload": "randrw", 00:11:52.139 "percentage": 50, 00:11:52.139 "status": "finished", 00:11:52.139 "queue_depth": 1, 00:11:52.139 "io_size": 131072, 00:11:52.139 "runtime": 1.407507, 00:11:52.139 "iops": 10147.729283051523, 00:11:52.139 "mibps": 1268.4661603814404, 00:11:52.139 "io_failed": 1, 00:11:52.139 "io_timeout": 0, 00:11:52.139 "avg_latency_us": 138.05800870649932, 00:11:52.139 "min_latency_us": 43.054545454545455, 00:11:52.139 "max_latency_us": 1854.370909090909 00:11:52.139 } 00:11:52.139 ], 00:11:52.139 "core_count": 1 00:11:52.139 } 00:11:52.139 [2024-11-20 14:28:53.050706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71183 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71183 ']' 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71183 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71183 00:11:52.139 killing process with pid 71183 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71183' 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71183 00:11:52.139 14:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71183 00:11:52.139 [2024-11-20 14:28:53.097076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:52.397 [2024-11-20 14:28:53.438963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.er19h8QK9O 00:11:53.835 ************************************ 00:11:53.835 END TEST raid_read_error_test 00:11:53.835 ************************************ 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:53.835 00:11:53.835 real 0m5.049s 
00:11:53.835 user 0m6.316s 00:11:53.835 sys 0m0.596s 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.835 14:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.835 14:28:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:53.835 14:28:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:53.835 14:28:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.835 14:28:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.835 ************************************ 00:11:53.835 START TEST raid_write_error_test 00:11:53.835 ************************************ 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:53.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0lOHgDBH6W 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71334 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71334 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71334 ']' 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.835 14:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.835 [2024-11-20 14:28:54.747719] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:11:53.835 [2024-11-20 14:28:54.747911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71334 ] 00:11:54.093 [2024-11-20 14:28:54.938114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.093 [2024-11-20 14:28:55.094403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.351 [2024-11-20 14:28:55.332367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.351 [2024-11-20 14:28:55.332652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 BaseBdev1_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 true 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 [2024-11-20 14:28:55.748910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:54.918 [2024-11-20 14:28:55.749008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.918 [2024-11-20 14:28:55.749063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:54.918 [2024-11-20 14:28:55.749091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.918 [2024-11-20 14:28:55.752512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.918 [2024-11-20 14:28:55.752778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:54.918 BaseBdev1 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 BaseBdev2_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:54.918 14:28:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 true 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 [2024-11-20 14:28:55.805540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:54.918 [2024-11-20 14:28:55.805647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.918 [2024-11-20 14:28:55.805696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:54.918 [2024-11-20 14:28:55.805725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.918 [2024-11-20 14:28:55.808901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.918 [2024-11-20 14:28:55.808960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:54.918 BaseBdev2 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:54.918 BaseBdev3_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 true 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 [2024-11-20 14:28:55.875911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:54.918 [2024-11-20 14:28:55.875995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.918 [2024-11-20 14:28:55.876042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:54.918 [2024-11-20 14:28:55.876070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.918 [2024-11-20 14:28:55.879244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.918 [2024-11-20 14:28:55.879303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:54.918 BaseBdev3 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 BaseBdev4_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.918 true 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.918 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.919 [2024-11-20 14:28:55.936750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:54.919 [2024-11-20 14:28:55.936829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.919 [2024-11-20 14:28:55.936874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:54.919 [2024-11-20 14:28:55.936901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.919 [2024-11-20 14:28:55.940030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.919 [2024-11-20 14:28:55.940092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:54.919 BaseBdev4 
00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.919 [2024-11-20 14:28:55.949040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.919 [2024-11-20 14:28:55.951759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.919 [2024-11-20 14:28:55.951878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.919 [2024-11-20 14:28:55.951986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.919 [2024-11-20 14:28:55.952311] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:54.919 [2024-11-20 14:28:55.952338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:54.919 [2024-11-20 14:28:55.952732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:54.919 [2024-11-20 14:28:55.952972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:54.919 [2024-11-20 14:28:55.952992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:54.919 [2024-11-20 14:28:55.953283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.919 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.177 14:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.177 14:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.177 "name": "raid_bdev1", 00:11:55.177 "uuid": "ce1ed9f0-cc9f-4523-8395-e550b21820a8", 00:11:55.177 "strip_size_kb": 64, 00:11:55.177 "state": "online", 00:11:55.177 "raid_level": "raid0", 00:11:55.177 "superblock": true, 00:11:55.177 "num_base_bdevs": 4, 00:11:55.177 "num_base_bdevs_discovered": 4, 00:11:55.177 
"num_base_bdevs_operational": 4, 00:11:55.177 "base_bdevs_list": [ 00:11:55.177 { 00:11:55.177 "name": "BaseBdev1", 00:11:55.177 "uuid": "8a199b4e-5f47-5043-8ce2-1c663f598ac4", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 "data_size": 63488 00:11:55.177 }, 00:11:55.177 { 00:11:55.177 "name": "BaseBdev2", 00:11:55.177 "uuid": "81779a3d-1a97-5b4f-83fb-909e3399bd63", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 "data_size": 63488 00:11:55.177 }, 00:11:55.177 { 00:11:55.177 "name": "BaseBdev3", 00:11:55.177 "uuid": "f490c6fa-34bf-582e-b82c-32de75c6a78d", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 "data_size": 63488 00:11:55.177 }, 00:11:55.177 { 00:11:55.177 "name": "BaseBdev4", 00:11:55.177 "uuid": "0ecd7105-1ed5-5bde-8941-e2b6b8f484c6", 00:11:55.177 "is_configured": true, 00:11:55.177 "data_offset": 2048, 00:11:55.177 "data_size": 63488 00:11:55.177 } 00:11:55.177 ] 00:11:55.177 }' 00:11:55.177 14:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.177 14:28:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.436 14:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:55.436 14:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:55.695 [2024-11-20 14:28:56.594906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.685 "name": "raid_bdev1", 00:11:56.685 "uuid": "ce1ed9f0-cc9f-4523-8395-e550b21820a8", 00:11:56.685 "strip_size_kb": 64, 00:11:56.685 "state": "online", 00:11:56.685 "raid_level": "raid0", 00:11:56.685 "superblock": true, 00:11:56.685 "num_base_bdevs": 4, 00:11:56.685 "num_base_bdevs_discovered": 4, 00:11:56.685 "num_base_bdevs_operational": 4, 00:11:56.685 "base_bdevs_list": [ 00:11:56.685 { 00:11:56.685 "name": "BaseBdev1", 00:11:56.685 "uuid": "8a199b4e-5f47-5043-8ce2-1c663f598ac4", 00:11:56.685 "is_configured": true, 00:11:56.685 "data_offset": 2048, 00:11:56.685 "data_size": 63488 00:11:56.685 }, 00:11:56.685 { 00:11:56.685 "name": "BaseBdev2", 00:11:56.685 "uuid": "81779a3d-1a97-5b4f-83fb-909e3399bd63", 00:11:56.685 "is_configured": true, 00:11:56.685 "data_offset": 2048, 00:11:56.685 "data_size": 63488 00:11:56.685 }, 00:11:56.685 { 00:11:56.685 "name": "BaseBdev3", 00:11:56.685 "uuid": "f490c6fa-34bf-582e-b82c-32de75c6a78d", 00:11:56.685 "is_configured": true, 00:11:56.685 "data_offset": 2048, 00:11:56.685 "data_size": 63488 00:11:56.685 }, 00:11:56.685 { 00:11:56.685 "name": "BaseBdev4", 00:11:56.685 "uuid": "0ecd7105-1ed5-5bde-8941-e2b6b8f484c6", 00:11:56.685 "is_configured": true, 00:11:56.685 "data_offset": 2048, 00:11:56.685 "data_size": 63488 00:11:56.685 } 00:11:56.685 ] 00:11:56.685 }' 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.685 14:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:57.252 [2024-11-20 14:28:58.018532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.252 [2024-11-20 14:28:58.018823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.252 [2024-11-20 14:28:58.022372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.252 [2024-11-20 14:28:58.022613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.252 [2024-11-20 14:28:58.022721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.252 [2024-11-20 14:28:58.022746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:57.252 { 00:11:57.252 "results": [ 00:11:57.252 { 00:11:57.252 "job": "raid_bdev1", 00:11:57.252 "core_mask": "0x1", 00:11:57.252 "workload": "randrw", 00:11:57.252 "percentage": 50, 00:11:57.252 "status": "finished", 00:11:57.252 "queue_depth": 1, 00:11:57.252 "io_size": 131072, 00:11:57.252 "runtime": 1.421427, 00:11:57.252 "iops": 10332.574237016745, 00:11:57.252 "mibps": 1291.571779627093, 00:11:57.252 "io_failed": 1, 00:11:57.252 "io_timeout": 0, 00:11:57.252 "avg_latency_us": 135.348504654387, 00:11:57.252 "min_latency_us": 43.054545454545455, 00:11:57.252 "max_latency_us": 1846.9236363636364 00:11:57.252 } 00:11:57.252 ], 00:11:57.252 "core_count": 1 00:11:57.252 } 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71334 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71334 ']' 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71334 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71334 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71334' 00:11:57.252 killing process with pid 71334 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71334 00:11:57.252 14:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71334 00:11:57.252 [2024-11-20 14:28:58.058743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.511 [2024-11-20 14:28:58.360989] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.447 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0lOHgDBH6W 00:11:58.447 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:58.447 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:58.706 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:58.706 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:58.706 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:58.706 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:58.706 14:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:58.706 00:11:58.706 real 0m4.907s 00:11:58.706 user 0m6.013s 00:11:58.706 sys 0m0.629s 00:11:58.706 
************************************ 00:11:58.706 END TEST raid_write_error_test 00:11:58.706 ************************************ 00:11:58.706 14:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.706 14:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.706 14:28:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:58.706 14:28:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:58.706 14:28:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:58.706 14:28:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.706 14:28:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.706 ************************************ 00:11:58.706 START TEST raid_state_function_test 00:11:58.706 ************************************ 00:11:58.706 14:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:58.707 14:28:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:58.707 14:28:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71478 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71478' 00:11:58.707 Process raid pid: 71478 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71478 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71478 ']' 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.707 14:28:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.707 [2024-11-20 14:28:59.688582] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:11:58.707 [2024-11-20 14:28:59.688787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.966 [2024-11-20 14:28:59.883100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.224 [2024-11-20 14:29:00.061569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.481 [2024-11-20 14:29:00.300010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.481 [2024-11-20 14:29:00.300077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.740 [2024-11-20 14:29:00.691916] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.740 [2024-11-20 14:29:00.691990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.740 [2024-11-20 14:29:00.692009] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.740 [2024-11-20 14:29:00.692026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.740 [2024-11-20 14:29:00.692036] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:59.740 [2024-11-20 14:29:00.692051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.740 [2024-11-20 14:29:00.692061] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:59.740 [2024-11-20 14:29:00.692076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.740 "name": "Existed_Raid", 00:11:59.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.740 "strip_size_kb": 64, 00:11:59.740 "state": "configuring", 00:11:59.740 "raid_level": "concat", 00:11:59.740 "superblock": false, 00:11:59.740 "num_base_bdevs": 4, 00:11:59.740 "num_base_bdevs_discovered": 0, 00:11:59.740 "num_base_bdevs_operational": 4, 00:11:59.740 "base_bdevs_list": [ 00:11:59.740 { 00:11:59.740 "name": "BaseBdev1", 00:11:59.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.740 "is_configured": false, 00:11:59.740 "data_offset": 0, 00:11:59.740 "data_size": 0 00:11:59.740 }, 00:11:59.740 { 00:11:59.740 "name": "BaseBdev2", 00:11:59.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.740 "is_configured": false, 00:11:59.740 "data_offset": 0, 00:11:59.740 "data_size": 0 00:11:59.740 }, 00:11:59.740 { 00:11:59.740 "name": "BaseBdev3", 00:11:59.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.740 "is_configured": false, 00:11:59.740 "data_offset": 0, 00:11:59.740 "data_size": 0 00:11:59.740 }, 00:11:59.740 { 00:11:59.740 "name": "BaseBdev4", 00:11:59.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.740 "is_configured": false, 00:11:59.740 "data_offset": 0, 00:11:59.740 "data_size": 0 00:11:59.740 } 00:11:59.740 ] 00:11:59.740 }' 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.740 14:29:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.308 [2024-11-20 14:29:01.195997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.308 [2024-11-20 14:29:01.196056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.308 [2024-11-20 14:29:01.203979] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.308 [2024-11-20 14:29:01.204040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.308 [2024-11-20 14:29:01.204059] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.308 [2024-11-20 14:29:01.204077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.308 [2024-11-20 14:29:01.204093] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:00.308 [2024-11-20 14:29:01.204120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.308 [2024-11-20 14:29:01.204136] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.308 [2024-11-20 14:29:01.204152] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.308 [2024-11-20 14:29:01.251312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.308 BaseBdev1 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.308 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.308 [ 00:12:00.308 { 00:12:00.308 "name": "BaseBdev1", 00:12:00.308 "aliases": [ 00:12:00.308 "406ef7e2-74e9-46e6-a3df-79362d142f69" 00:12:00.308 ], 00:12:00.308 "product_name": "Malloc disk", 00:12:00.308 "block_size": 512, 00:12:00.308 "num_blocks": 65536, 00:12:00.308 "uuid": "406ef7e2-74e9-46e6-a3df-79362d142f69", 00:12:00.308 "assigned_rate_limits": { 00:12:00.308 "rw_ios_per_sec": 0, 00:12:00.308 "rw_mbytes_per_sec": 0, 00:12:00.308 "r_mbytes_per_sec": 0, 00:12:00.308 "w_mbytes_per_sec": 0 00:12:00.308 }, 00:12:00.308 "claimed": true, 00:12:00.308 "claim_type": "exclusive_write", 00:12:00.308 "zoned": false, 00:12:00.308 "supported_io_types": { 00:12:00.308 "read": true, 00:12:00.308 "write": true, 00:12:00.308 "unmap": true, 00:12:00.308 "flush": true, 00:12:00.308 "reset": true, 00:12:00.308 "nvme_admin": false, 00:12:00.308 "nvme_io": false, 00:12:00.308 "nvme_io_md": false, 00:12:00.308 "write_zeroes": true, 00:12:00.308 "zcopy": true, 00:12:00.308 "get_zone_info": false, 00:12:00.309 "zone_management": false, 00:12:00.309 "zone_append": false, 00:12:00.309 "compare": false, 00:12:00.309 "compare_and_write": false, 00:12:00.309 "abort": true, 00:12:00.309 "seek_hole": false, 00:12:00.309 "seek_data": false, 00:12:00.309 "copy": true, 00:12:00.309 "nvme_iov_md": false 00:12:00.309 }, 00:12:00.309 "memory_domains": [ 00:12:00.309 { 00:12:00.309 "dma_device_id": "system", 00:12:00.309 "dma_device_type": 1 00:12:00.309 }, 00:12:00.309 { 00:12:00.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.309 "dma_device_type": 2 00:12:00.309 } 00:12:00.309 ], 00:12:00.309 "driver_specific": {} 00:12:00.309 } 00:12:00.309 ] 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.309 "name": "Existed_Raid", 
00:12:00.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.309 "strip_size_kb": 64, 00:12:00.309 "state": "configuring", 00:12:00.309 "raid_level": "concat", 00:12:00.309 "superblock": false, 00:12:00.309 "num_base_bdevs": 4, 00:12:00.309 "num_base_bdevs_discovered": 1, 00:12:00.309 "num_base_bdevs_operational": 4, 00:12:00.309 "base_bdevs_list": [ 00:12:00.309 { 00:12:00.309 "name": "BaseBdev1", 00:12:00.309 "uuid": "406ef7e2-74e9-46e6-a3df-79362d142f69", 00:12:00.309 "is_configured": true, 00:12:00.309 "data_offset": 0, 00:12:00.309 "data_size": 65536 00:12:00.309 }, 00:12:00.309 { 00:12:00.309 "name": "BaseBdev2", 00:12:00.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.309 "is_configured": false, 00:12:00.309 "data_offset": 0, 00:12:00.309 "data_size": 0 00:12:00.309 }, 00:12:00.309 { 00:12:00.309 "name": "BaseBdev3", 00:12:00.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.309 "is_configured": false, 00:12:00.309 "data_offset": 0, 00:12:00.309 "data_size": 0 00:12:00.309 }, 00:12:00.309 { 00:12:00.309 "name": "BaseBdev4", 00:12:00.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.309 "is_configured": false, 00:12:00.309 "data_offset": 0, 00:12:00.309 "data_size": 0 00:12:00.309 } 00:12:00.309 ] 00:12:00.309 }' 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.309 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.963 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:00.963 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.963 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.963 [2024-11-20 14:29:01.811534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.963 [2024-11-20 14:29:01.811615] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.964 [2024-11-20 14:29:01.819559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.964 [2024-11-20 14:29:01.822226] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.964 [2024-11-20 14:29:01.822283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.964 [2024-11-20 14:29:01.822301] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:00.964 [2024-11-20 14:29:01.822325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.964 [2024-11-20 14:29:01.822337] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.964 [2024-11-20 14:29:01.822351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.964 "name": "Existed_Raid", 00:12:00.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.964 "strip_size_kb": 64, 00:12:00.964 "state": "configuring", 00:12:00.964 "raid_level": "concat", 00:12:00.964 "superblock": false, 00:12:00.964 "num_base_bdevs": 4, 00:12:00.964 
"num_base_bdevs_discovered": 1, 00:12:00.964 "num_base_bdevs_operational": 4, 00:12:00.964 "base_bdevs_list": [ 00:12:00.964 { 00:12:00.964 "name": "BaseBdev1", 00:12:00.964 "uuid": "406ef7e2-74e9-46e6-a3df-79362d142f69", 00:12:00.964 "is_configured": true, 00:12:00.964 "data_offset": 0, 00:12:00.964 "data_size": 65536 00:12:00.964 }, 00:12:00.964 { 00:12:00.964 "name": "BaseBdev2", 00:12:00.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.964 "is_configured": false, 00:12:00.964 "data_offset": 0, 00:12:00.964 "data_size": 0 00:12:00.964 }, 00:12:00.964 { 00:12:00.964 "name": "BaseBdev3", 00:12:00.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.964 "is_configured": false, 00:12:00.964 "data_offset": 0, 00:12:00.964 "data_size": 0 00:12:00.964 }, 00:12:00.964 { 00:12:00.964 "name": "BaseBdev4", 00:12:00.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.964 "is_configured": false, 00:12:00.964 "data_offset": 0, 00:12:00.964 "data_size": 0 00:12:00.964 } 00:12:00.964 ] 00:12:00.964 }' 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.964 14:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.531 [2024-11-20 14:29:02.354693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.531 BaseBdev2 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:01.531 14:29:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.531 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.532 [ 00:12:01.532 { 00:12:01.532 "name": "BaseBdev2", 00:12:01.532 "aliases": [ 00:12:01.532 "56e2c2d3-3b55-4b85-ada1-75c2dfdeb4da" 00:12:01.532 ], 00:12:01.532 "product_name": "Malloc disk", 00:12:01.532 "block_size": 512, 00:12:01.532 "num_blocks": 65536, 00:12:01.532 "uuid": "56e2c2d3-3b55-4b85-ada1-75c2dfdeb4da", 00:12:01.532 "assigned_rate_limits": { 00:12:01.532 "rw_ios_per_sec": 0, 00:12:01.532 "rw_mbytes_per_sec": 0, 00:12:01.532 "r_mbytes_per_sec": 0, 00:12:01.532 "w_mbytes_per_sec": 0 00:12:01.532 }, 00:12:01.532 "claimed": true, 00:12:01.532 "claim_type": "exclusive_write", 00:12:01.532 "zoned": false, 00:12:01.532 "supported_io_types": { 
00:12:01.532 "read": true, 00:12:01.532 "write": true, 00:12:01.532 "unmap": true, 00:12:01.532 "flush": true, 00:12:01.532 "reset": true, 00:12:01.532 "nvme_admin": false, 00:12:01.532 "nvme_io": false, 00:12:01.532 "nvme_io_md": false, 00:12:01.532 "write_zeroes": true, 00:12:01.532 "zcopy": true, 00:12:01.532 "get_zone_info": false, 00:12:01.532 "zone_management": false, 00:12:01.532 "zone_append": false, 00:12:01.532 "compare": false, 00:12:01.532 "compare_and_write": false, 00:12:01.532 "abort": true, 00:12:01.532 "seek_hole": false, 00:12:01.532 "seek_data": false, 00:12:01.532 "copy": true, 00:12:01.532 "nvme_iov_md": false 00:12:01.532 }, 00:12:01.532 "memory_domains": [ 00:12:01.532 { 00:12:01.532 "dma_device_id": "system", 00:12:01.532 "dma_device_type": 1 00:12:01.532 }, 00:12:01.532 { 00:12:01.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.532 "dma_device_type": 2 00:12:01.532 } 00:12:01.532 ], 00:12:01.532 "driver_specific": {} 00:12:01.532 } 00:12:01.532 ] 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.532 "name": "Existed_Raid", 00:12:01.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.532 "strip_size_kb": 64, 00:12:01.532 "state": "configuring", 00:12:01.532 "raid_level": "concat", 00:12:01.532 "superblock": false, 00:12:01.532 "num_base_bdevs": 4, 00:12:01.532 "num_base_bdevs_discovered": 2, 00:12:01.532 "num_base_bdevs_operational": 4, 00:12:01.532 "base_bdevs_list": [ 00:12:01.532 { 00:12:01.532 "name": "BaseBdev1", 00:12:01.532 "uuid": "406ef7e2-74e9-46e6-a3df-79362d142f69", 00:12:01.532 "is_configured": true, 00:12:01.532 "data_offset": 0, 00:12:01.532 "data_size": 65536 00:12:01.532 }, 00:12:01.532 { 00:12:01.532 "name": "BaseBdev2", 00:12:01.532 "uuid": "56e2c2d3-3b55-4b85-ada1-75c2dfdeb4da", 00:12:01.532 
"is_configured": true, 00:12:01.532 "data_offset": 0, 00:12:01.532 "data_size": 65536 00:12:01.532 }, 00:12:01.532 { 00:12:01.532 "name": "BaseBdev3", 00:12:01.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.532 "is_configured": false, 00:12:01.532 "data_offset": 0, 00:12:01.532 "data_size": 0 00:12:01.532 }, 00:12:01.532 { 00:12:01.532 "name": "BaseBdev4", 00:12:01.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.532 "is_configured": false, 00:12:01.532 "data_offset": 0, 00:12:01.532 "data_size": 0 00:12:01.532 } 00:12:01.532 ] 00:12:01.532 }' 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.532 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.100 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.101 [2024-11-20 14:29:02.937340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.101 BaseBdev3 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.101 [ 00:12:02.101 { 00:12:02.101 "name": "BaseBdev3", 00:12:02.101 "aliases": [ 00:12:02.101 "35e89d9d-df63-49d8-8575-58986e0dcb57" 00:12:02.101 ], 00:12:02.101 "product_name": "Malloc disk", 00:12:02.101 "block_size": 512, 00:12:02.101 "num_blocks": 65536, 00:12:02.101 "uuid": "35e89d9d-df63-49d8-8575-58986e0dcb57", 00:12:02.101 "assigned_rate_limits": { 00:12:02.101 "rw_ios_per_sec": 0, 00:12:02.101 "rw_mbytes_per_sec": 0, 00:12:02.101 "r_mbytes_per_sec": 0, 00:12:02.101 "w_mbytes_per_sec": 0 00:12:02.101 }, 00:12:02.101 "claimed": true, 00:12:02.101 "claim_type": "exclusive_write", 00:12:02.101 "zoned": false, 00:12:02.101 "supported_io_types": { 00:12:02.101 "read": true, 00:12:02.101 "write": true, 00:12:02.101 "unmap": true, 00:12:02.101 "flush": true, 00:12:02.101 "reset": true, 00:12:02.101 "nvme_admin": false, 00:12:02.101 "nvme_io": false, 00:12:02.101 "nvme_io_md": false, 00:12:02.101 "write_zeroes": true, 00:12:02.101 "zcopy": true, 00:12:02.101 "get_zone_info": false, 00:12:02.101 "zone_management": false, 00:12:02.101 "zone_append": false, 00:12:02.101 "compare": false, 00:12:02.101 "compare_and_write": false, 
00:12:02.101 "abort": true, 00:12:02.101 "seek_hole": false, 00:12:02.101 "seek_data": false, 00:12:02.101 "copy": true, 00:12:02.101 "nvme_iov_md": false 00:12:02.101 }, 00:12:02.101 "memory_domains": [ 00:12:02.101 { 00:12:02.101 "dma_device_id": "system", 00:12:02.101 "dma_device_type": 1 00:12:02.101 }, 00:12:02.101 { 00:12:02.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.101 "dma_device_type": 2 00:12:02.101 } 00:12:02.101 ], 00:12:02.101 "driver_specific": {} 00:12:02.101 } 00:12:02.101 ] 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.101 14:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.101 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.101 "name": "Existed_Raid", 00:12:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.101 "strip_size_kb": 64, 00:12:02.101 "state": "configuring", 00:12:02.101 "raid_level": "concat", 00:12:02.101 "superblock": false, 00:12:02.101 "num_base_bdevs": 4, 00:12:02.101 "num_base_bdevs_discovered": 3, 00:12:02.101 "num_base_bdevs_operational": 4, 00:12:02.101 "base_bdevs_list": [ 00:12:02.101 { 00:12:02.101 "name": "BaseBdev1", 00:12:02.101 "uuid": "406ef7e2-74e9-46e6-a3df-79362d142f69", 00:12:02.101 "is_configured": true, 00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 65536 00:12:02.101 }, 00:12:02.101 { 00:12:02.101 "name": "BaseBdev2", 00:12:02.101 "uuid": "56e2c2d3-3b55-4b85-ada1-75c2dfdeb4da", 00:12:02.101 "is_configured": true, 00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 65536 00:12:02.101 }, 00:12:02.101 { 00:12:02.101 "name": "BaseBdev3", 00:12:02.101 "uuid": "35e89d9d-df63-49d8-8575-58986e0dcb57", 00:12:02.101 "is_configured": true, 00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 65536 00:12:02.101 }, 00:12:02.101 { 00:12:02.101 "name": "BaseBdev4", 00:12:02.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.101 "is_configured": false, 
00:12:02.101 "data_offset": 0, 00:12:02.101 "data_size": 0 00:12:02.101 } 00:12:02.101 ] 00:12:02.101 }' 00:12:02.101 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.102 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.668 [2024-11-20 14:29:03.521261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:02.668 [2024-11-20 14:29:03.521332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:02.668 [2024-11-20 14:29:03.521346] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:02.668 [2024-11-20 14:29:03.521750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:02.668 [2024-11-20 14:29:03.521989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:02.668 [2024-11-20 14:29:03.522019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:02.668 [2024-11-20 14:29:03.522342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.668 BaseBdev4 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.668 [ 00:12:02.668 { 00:12:02.668 "name": "BaseBdev4", 00:12:02.668 "aliases": [ 00:12:02.668 "02ae5da2-65c8-409a-8e93-17b06b12349f" 00:12:02.668 ], 00:12:02.668 "product_name": "Malloc disk", 00:12:02.668 "block_size": 512, 00:12:02.668 "num_blocks": 65536, 00:12:02.668 "uuid": "02ae5da2-65c8-409a-8e93-17b06b12349f", 00:12:02.668 "assigned_rate_limits": { 00:12:02.668 "rw_ios_per_sec": 0, 00:12:02.668 "rw_mbytes_per_sec": 0, 00:12:02.668 "r_mbytes_per_sec": 0, 00:12:02.668 "w_mbytes_per_sec": 0 00:12:02.668 }, 00:12:02.668 "claimed": true, 00:12:02.668 "claim_type": "exclusive_write", 00:12:02.668 "zoned": false, 00:12:02.668 "supported_io_types": { 00:12:02.668 "read": true, 00:12:02.668 "write": true, 00:12:02.668 "unmap": true, 00:12:02.668 "flush": true, 00:12:02.668 "reset": true, 00:12:02.668 
"nvme_admin": false, 00:12:02.668 "nvme_io": false, 00:12:02.668 "nvme_io_md": false, 00:12:02.668 "write_zeroes": true, 00:12:02.668 "zcopy": true, 00:12:02.668 "get_zone_info": false, 00:12:02.668 "zone_management": false, 00:12:02.668 "zone_append": false, 00:12:02.668 "compare": false, 00:12:02.668 "compare_and_write": false, 00:12:02.668 "abort": true, 00:12:02.668 "seek_hole": false, 00:12:02.668 "seek_data": false, 00:12:02.668 "copy": true, 00:12:02.668 "nvme_iov_md": false 00:12:02.668 }, 00:12:02.668 "memory_domains": [ 00:12:02.668 { 00:12:02.668 "dma_device_id": "system", 00:12:02.668 "dma_device_type": 1 00:12:02.668 }, 00:12:02.668 { 00:12:02.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.668 "dma_device_type": 2 00:12:02.668 } 00:12:02.668 ], 00:12:02.668 "driver_specific": {} 00:12:02.668 } 00:12:02.668 ] 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.668 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.669 
14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.669 "name": "Existed_Raid", 00:12:02.669 "uuid": "e4f3866d-1b89-4cf4-b012-fa07be15115a", 00:12:02.669 "strip_size_kb": 64, 00:12:02.669 "state": "online", 00:12:02.669 "raid_level": "concat", 00:12:02.669 "superblock": false, 00:12:02.669 "num_base_bdevs": 4, 00:12:02.669 "num_base_bdevs_discovered": 4, 00:12:02.669 "num_base_bdevs_operational": 4, 00:12:02.669 "base_bdevs_list": [ 00:12:02.669 { 00:12:02.669 "name": "BaseBdev1", 00:12:02.669 "uuid": "406ef7e2-74e9-46e6-a3df-79362d142f69", 00:12:02.669 "is_configured": true, 00:12:02.669 "data_offset": 0, 00:12:02.669 "data_size": 65536 00:12:02.669 }, 00:12:02.669 { 00:12:02.669 "name": "BaseBdev2", 00:12:02.669 "uuid": "56e2c2d3-3b55-4b85-ada1-75c2dfdeb4da", 00:12:02.669 "is_configured": true, 00:12:02.669 "data_offset": 0, 00:12:02.669 "data_size": 65536 00:12:02.669 }, 00:12:02.669 { 00:12:02.669 "name": "BaseBdev3", 
00:12:02.669 "uuid": "35e89d9d-df63-49d8-8575-58986e0dcb57", 00:12:02.669 "is_configured": true, 00:12:02.669 "data_offset": 0, 00:12:02.669 "data_size": 65536 00:12:02.669 }, 00:12:02.669 { 00:12:02.669 "name": "BaseBdev4", 00:12:02.669 "uuid": "02ae5da2-65c8-409a-8e93-17b06b12349f", 00:12:02.669 "is_configured": true, 00:12:02.669 "data_offset": 0, 00:12:02.669 "data_size": 65536 00:12:02.669 } 00:12:02.669 ] 00:12:02.669 }' 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.669 14:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.235 [2024-11-20 14:29:04.069959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.235 
14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.235 "name": "Existed_Raid", 00:12:03.235 "aliases": [ 00:12:03.235 "e4f3866d-1b89-4cf4-b012-fa07be15115a" 00:12:03.235 ], 00:12:03.235 "product_name": "Raid Volume", 00:12:03.235 "block_size": 512, 00:12:03.235 "num_blocks": 262144, 00:12:03.235 "uuid": "e4f3866d-1b89-4cf4-b012-fa07be15115a", 00:12:03.235 "assigned_rate_limits": { 00:12:03.235 "rw_ios_per_sec": 0, 00:12:03.235 "rw_mbytes_per_sec": 0, 00:12:03.235 "r_mbytes_per_sec": 0, 00:12:03.235 "w_mbytes_per_sec": 0 00:12:03.235 }, 00:12:03.235 "claimed": false, 00:12:03.235 "zoned": false, 00:12:03.235 "supported_io_types": { 00:12:03.235 "read": true, 00:12:03.235 "write": true, 00:12:03.235 "unmap": true, 00:12:03.235 "flush": true, 00:12:03.235 "reset": true, 00:12:03.235 "nvme_admin": false, 00:12:03.235 "nvme_io": false, 00:12:03.235 "nvme_io_md": false, 00:12:03.235 "write_zeroes": true, 00:12:03.235 "zcopy": false, 00:12:03.235 "get_zone_info": false, 00:12:03.235 "zone_management": false, 00:12:03.235 "zone_append": false, 00:12:03.235 "compare": false, 00:12:03.235 "compare_and_write": false, 00:12:03.235 "abort": false, 00:12:03.235 "seek_hole": false, 00:12:03.235 "seek_data": false, 00:12:03.235 "copy": false, 00:12:03.235 "nvme_iov_md": false 00:12:03.235 }, 00:12:03.235 "memory_domains": [ 00:12:03.235 { 00:12:03.235 "dma_device_id": "system", 00:12:03.235 "dma_device_type": 1 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.235 "dma_device_type": 2 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "dma_device_id": "system", 00:12:03.235 "dma_device_type": 1 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.235 "dma_device_type": 2 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "dma_device_id": "system", 00:12:03.235 "dma_device_type": 1 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:03.235 "dma_device_type": 2 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "dma_device_id": "system", 00:12:03.235 "dma_device_type": 1 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.235 "dma_device_type": 2 00:12:03.235 } 00:12:03.235 ], 00:12:03.235 "driver_specific": { 00:12:03.235 "raid": { 00:12:03.235 "uuid": "e4f3866d-1b89-4cf4-b012-fa07be15115a", 00:12:03.235 "strip_size_kb": 64, 00:12:03.235 "state": "online", 00:12:03.235 "raid_level": "concat", 00:12:03.235 "superblock": false, 00:12:03.235 "num_base_bdevs": 4, 00:12:03.235 "num_base_bdevs_discovered": 4, 00:12:03.235 "num_base_bdevs_operational": 4, 00:12:03.235 "base_bdevs_list": [ 00:12:03.235 { 00:12:03.235 "name": "BaseBdev1", 00:12:03.235 "uuid": "406ef7e2-74e9-46e6-a3df-79362d142f69", 00:12:03.235 "is_configured": true, 00:12:03.235 "data_offset": 0, 00:12:03.235 "data_size": 65536 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "name": "BaseBdev2", 00:12:03.235 "uuid": "56e2c2d3-3b55-4b85-ada1-75c2dfdeb4da", 00:12:03.235 "is_configured": true, 00:12:03.235 "data_offset": 0, 00:12:03.235 "data_size": 65536 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "name": "BaseBdev3", 00:12:03.235 "uuid": "35e89d9d-df63-49d8-8575-58986e0dcb57", 00:12:03.235 "is_configured": true, 00:12:03.235 "data_offset": 0, 00:12:03.235 "data_size": 65536 00:12:03.235 }, 00:12:03.235 { 00:12:03.235 "name": "BaseBdev4", 00:12:03.235 "uuid": "02ae5da2-65c8-409a-8e93-17b06b12349f", 00:12:03.235 "is_configured": true, 00:12:03.235 "data_offset": 0, 00:12:03.235 "data_size": 65536 00:12:03.235 } 00:12:03.235 ] 00:12:03.235 } 00:12:03.235 } 00:12:03.235 }' 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:03.235 BaseBdev2 
00:12:03.235 BaseBdev3 00:12:03.235 BaseBdev4' 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.235 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.236 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.236 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.236 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:03.236 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.236 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.236 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.236 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.495 14:29:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.495 14:29:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.495 [2024-11-20 14:29:04.433664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.495 [2024-11-20 14:29:04.433706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.495 [2024-11-20 14:29:04.433781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.495 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.754 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.754 "name": "Existed_Raid", 00:12:03.754 "uuid": "e4f3866d-1b89-4cf4-b012-fa07be15115a", 00:12:03.754 "strip_size_kb": 64, 00:12:03.754 "state": "offline", 00:12:03.754 "raid_level": "concat", 00:12:03.754 "superblock": false, 00:12:03.754 "num_base_bdevs": 4, 00:12:03.754 "num_base_bdevs_discovered": 3, 00:12:03.754 "num_base_bdevs_operational": 3, 00:12:03.754 "base_bdevs_list": [ 00:12:03.754 { 00:12:03.754 "name": null, 00:12:03.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.754 "is_configured": false, 00:12:03.754 "data_offset": 0, 00:12:03.754 "data_size": 65536 00:12:03.754 }, 00:12:03.754 { 00:12:03.754 "name": "BaseBdev2", 00:12:03.754 "uuid": "56e2c2d3-3b55-4b85-ada1-75c2dfdeb4da", 00:12:03.754 "is_configured": 
true, 00:12:03.754 "data_offset": 0, 00:12:03.754 "data_size": 65536 00:12:03.754 }, 00:12:03.754 { 00:12:03.754 "name": "BaseBdev3", 00:12:03.754 "uuid": "35e89d9d-df63-49d8-8575-58986e0dcb57", 00:12:03.754 "is_configured": true, 00:12:03.754 "data_offset": 0, 00:12:03.754 "data_size": 65536 00:12:03.754 }, 00:12:03.754 { 00:12:03.754 "name": "BaseBdev4", 00:12:03.754 "uuid": "02ae5da2-65c8-409a-8e93-17b06b12349f", 00:12:03.754 "is_configured": true, 00:12:03.754 "data_offset": 0, 00:12:03.754 "data_size": 65536 00:12:03.754 } 00:12:03.754 ] 00:12:03.754 }' 00:12:03.754 14:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.754 14:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.012 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.012 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.012 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.012 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.012 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.012 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.012 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.280 [2024-11-20 14:29:05.087976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.280 [2024-11-20 14:29:05.236151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.280 14:29:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.280 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.551 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.551 [2024-11-20 14:29:05.379371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:04.551 [2024-11-20 14:29:05.379562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.552 BaseBdev2 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.552 [ 00:12:04.552 { 00:12:04.552 "name": "BaseBdev2", 00:12:04.552 "aliases": [ 00:12:04.552 "87ced052-0e2f-42a5-a5e4-3a488dd32dc9" 00:12:04.552 ], 00:12:04.552 "product_name": "Malloc disk", 00:12:04.552 "block_size": 512, 00:12:04.552 "num_blocks": 65536, 00:12:04.552 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:04.552 "assigned_rate_limits": { 00:12:04.552 "rw_ios_per_sec": 0, 00:12:04.552 "rw_mbytes_per_sec": 0, 00:12:04.552 "r_mbytes_per_sec": 0, 00:12:04.552 "w_mbytes_per_sec": 0 00:12:04.552 }, 00:12:04.552 "claimed": false, 00:12:04.552 "zoned": false, 00:12:04.552 "supported_io_types": { 00:12:04.552 "read": true, 00:12:04.552 "write": true, 00:12:04.552 "unmap": true, 00:12:04.552 "flush": true, 00:12:04.552 "reset": true, 00:12:04.552 "nvme_admin": false, 00:12:04.552 "nvme_io": false, 00:12:04.552 "nvme_io_md": false, 00:12:04.552 "write_zeroes": true, 00:12:04.552 "zcopy": true, 00:12:04.552 "get_zone_info": false, 00:12:04.552 "zone_management": false, 00:12:04.552 "zone_append": false, 00:12:04.552 "compare": false, 00:12:04.552 "compare_and_write": false, 00:12:04.552 "abort": true, 00:12:04.552 "seek_hole": false, 00:12:04.552 
"seek_data": false, 00:12:04.552 "copy": true, 00:12:04.552 "nvme_iov_md": false 00:12:04.552 }, 00:12:04.552 "memory_domains": [ 00:12:04.552 { 00:12:04.552 "dma_device_id": "system", 00:12:04.552 "dma_device_type": 1 00:12:04.552 }, 00:12:04.552 { 00:12:04.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.552 "dma_device_type": 2 00:12:04.552 } 00:12:04.552 ], 00:12:04.552 "driver_specific": {} 00:12:04.552 } 00:12:04.552 ] 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.552 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.811 BaseBdev3 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.811 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.811 [ 00:12:04.811 { 00:12:04.811 "name": "BaseBdev3", 00:12:04.811 "aliases": [ 00:12:04.811 "ba815df3-d2a9-4a9b-99f2-bb32741fb58c" 00:12:04.811 ], 00:12:04.811 "product_name": "Malloc disk", 00:12:04.812 "block_size": 512, 00:12:04.812 "num_blocks": 65536, 00:12:04.812 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:04.812 "assigned_rate_limits": { 00:12:04.812 "rw_ios_per_sec": 0, 00:12:04.812 "rw_mbytes_per_sec": 0, 00:12:04.812 "r_mbytes_per_sec": 0, 00:12:04.812 "w_mbytes_per_sec": 0 00:12:04.812 }, 00:12:04.812 "claimed": false, 00:12:04.812 "zoned": false, 00:12:04.812 "supported_io_types": { 00:12:04.812 "read": true, 00:12:04.812 "write": true, 00:12:04.812 "unmap": true, 00:12:04.812 "flush": true, 00:12:04.812 "reset": true, 00:12:04.812 "nvme_admin": false, 00:12:04.812 "nvme_io": false, 00:12:04.812 "nvme_io_md": false, 00:12:04.812 "write_zeroes": true, 00:12:04.812 "zcopy": true, 00:12:04.812 "get_zone_info": false, 00:12:04.812 "zone_management": false, 00:12:04.812 "zone_append": false, 00:12:04.812 "compare": false, 00:12:04.812 "compare_and_write": false, 00:12:04.812 "abort": true, 00:12:04.812 "seek_hole": false, 00:12:04.812 "seek_data": false, 
00:12:04.812 "copy": true, 00:12:04.812 "nvme_iov_md": false 00:12:04.812 }, 00:12:04.812 "memory_domains": [ 00:12:04.812 { 00:12:04.812 "dma_device_id": "system", 00:12:04.812 "dma_device_type": 1 00:12:04.812 }, 00:12:04.812 { 00:12:04.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.812 "dma_device_type": 2 00:12:04.812 } 00:12:04.812 ], 00:12:04.812 "driver_specific": {} 00:12:04.812 } 00:12:04.812 ] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.812 BaseBdev4 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.812 
14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.812 [ 00:12:04.812 { 00:12:04.812 "name": "BaseBdev4", 00:12:04.812 "aliases": [ 00:12:04.812 "0b4e292c-b174-46a6-8a92-7df3f73513e1" 00:12:04.812 ], 00:12:04.812 "product_name": "Malloc disk", 00:12:04.812 "block_size": 512, 00:12:04.812 "num_blocks": 65536, 00:12:04.812 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:04.812 "assigned_rate_limits": { 00:12:04.812 "rw_ios_per_sec": 0, 00:12:04.812 "rw_mbytes_per_sec": 0, 00:12:04.812 "r_mbytes_per_sec": 0, 00:12:04.812 "w_mbytes_per_sec": 0 00:12:04.812 }, 00:12:04.812 "claimed": false, 00:12:04.812 "zoned": false, 00:12:04.812 "supported_io_types": { 00:12:04.812 "read": true, 00:12:04.812 "write": true, 00:12:04.812 "unmap": true, 00:12:04.812 "flush": true, 00:12:04.812 "reset": true, 00:12:04.812 "nvme_admin": false, 00:12:04.812 "nvme_io": false, 00:12:04.812 "nvme_io_md": false, 00:12:04.812 "write_zeroes": true, 00:12:04.812 "zcopy": true, 00:12:04.812 "get_zone_info": false, 00:12:04.812 "zone_management": false, 00:12:04.812 "zone_append": false, 00:12:04.812 "compare": false, 00:12:04.812 "compare_and_write": false, 00:12:04.812 "abort": true, 00:12:04.812 "seek_hole": false, 00:12:04.812 "seek_data": false, 00:12:04.812 
"copy": true, 00:12:04.812 "nvme_iov_md": false 00:12:04.812 }, 00:12:04.812 "memory_domains": [ 00:12:04.812 { 00:12:04.812 "dma_device_id": "system", 00:12:04.812 "dma_device_type": 1 00:12:04.812 }, 00:12:04.812 { 00:12:04.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.812 "dma_device_type": 2 00:12:04.812 } 00:12:04.812 ], 00:12:04.812 "driver_specific": {} 00:12:04.812 } 00:12:04.812 ] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.812 [2024-11-20 14:29:05.749129] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:04.812 [2024-11-20 14:29:05.749319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:04.812 [2024-11-20 14:29:05.749371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.812 [2024-11-20 14:29:05.751947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.812 [2024-11-20 14:29:05.752023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.812 14:29:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.812 "name": "Existed_Raid", 00:12:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.812 "strip_size_kb": 64, 00:12:04.812 "state": "configuring", 00:12:04.812 
"raid_level": "concat", 00:12:04.812 "superblock": false, 00:12:04.812 "num_base_bdevs": 4, 00:12:04.812 "num_base_bdevs_discovered": 3, 00:12:04.812 "num_base_bdevs_operational": 4, 00:12:04.812 "base_bdevs_list": [ 00:12:04.812 { 00:12:04.812 "name": "BaseBdev1", 00:12:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.812 "is_configured": false, 00:12:04.812 "data_offset": 0, 00:12:04.812 "data_size": 0 00:12:04.812 }, 00:12:04.812 { 00:12:04.812 "name": "BaseBdev2", 00:12:04.812 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:04.812 "is_configured": true, 00:12:04.812 "data_offset": 0, 00:12:04.812 "data_size": 65536 00:12:04.812 }, 00:12:04.812 { 00:12:04.812 "name": "BaseBdev3", 00:12:04.812 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:04.812 "is_configured": true, 00:12:04.812 "data_offset": 0, 00:12:04.812 "data_size": 65536 00:12:04.812 }, 00:12:04.812 { 00:12:04.812 "name": "BaseBdev4", 00:12:04.812 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:04.812 "is_configured": true, 00:12:04.812 "data_offset": 0, 00:12:04.812 "data_size": 65536 00:12:04.812 } 00:12:04.812 ] 00:12:04.812 }' 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.812 14:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.381 [2024-11-20 14:29:06.261288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.381 "name": "Existed_Raid", 00:12:05.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.381 "strip_size_kb": 64, 00:12:05.381 "state": "configuring", 00:12:05.381 "raid_level": "concat", 00:12:05.381 "superblock": false, 
00:12:05.381 "num_base_bdevs": 4, 00:12:05.381 "num_base_bdevs_discovered": 2, 00:12:05.381 "num_base_bdevs_operational": 4, 00:12:05.381 "base_bdevs_list": [ 00:12:05.381 { 00:12:05.381 "name": "BaseBdev1", 00:12:05.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.381 "is_configured": false, 00:12:05.381 "data_offset": 0, 00:12:05.381 "data_size": 0 00:12:05.381 }, 00:12:05.381 { 00:12:05.381 "name": null, 00:12:05.381 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:05.381 "is_configured": false, 00:12:05.381 "data_offset": 0, 00:12:05.381 "data_size": 65536 00:12:05.381 }, 00:12:05.381 { 00:12:05.381 "name": "BaseBdev3", 00:12:05.381 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:05.381 "is_configured": true, 00:12:05.381 "data_offset": 0, 00:12:05.381 "data_size": 65536 00:12:05.381 }, 00:12:05.381 { 00:12:05.381 "name": "BaseBdev4", 00:12:05.381 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:05.381 "is_configured": true, 00:12:05.381 "data_offset": 0, 00:12:05.381 "data_size": 65536 00:12:05.381 } 00:12:05.381 ] 00:12:05.381 }' 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.381 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:05.949 14:29:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.949 [2024-11-20 14:29:06.900019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.949 BaseBdev1 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.949 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.950 [ 00:12:05.950 { 00:12:05.950 "name": "BaseBdev1", 00:12:05.950 "aliases": [ 00:12:05.950 "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a" 00:12:05.950 ], 00:12:05.950 "product_name": "Malloc disk", 00:12:05.950 "block_size": 512, 00:12:05.950 "num_blocks": 65536, 00:12:05.950 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:05.950 "assigned_rate_limits": { 00:12:05.950 "rw_ios_per_sec": 0, 00:12:05.950 "rw_mbytes_per_sec": 0, 00:12:05.950 "r_mbytes_per_sec": 0, 00:12:05.950 "w_mbytes_per_sec": 0 00:12:05.950 }, 00:12:05.950 "claimed": true, 00:12:05.950 "claim_type": "exclusive_write", 00:12:05.950 "zoned": false, 00:12:05.950 "supported_io_types": { 00:12:05.950 "read": true, 00:12:05.950 "write": true, 00:12:05.950 "unmap": true, 00:12:05.950 "flush": true, 00:12:05.950 "reset": true, 00:12:05.950 "nvme_admin": false, 00:12:05.950 "nvme_io": false, 00:12:05.950 "nvme_io_md": false, 00:12:05.950 "write_zeroes": true, 00:12:05.950 "zcopy": true, 00:12:05.950 "get_zone_info": false, 00:12:05.950 "zone_management": false, 00:12:05.950 "zone_append": false, 00:12:05.950 "compare": false, 00:12:05.950 "compare_and_write": false, 00:12:05.950 "abort": true, 00:12:05.950 "seek_hole": false, 00:12:05.950 "seek_data": false, 00:12:05.950 "copy": true, 00:12:05.950 "nvme_iov_md": false 00:12:05.950 }, 00:12:05.950 "memory_domains": [ 00:12:05.950 { 00:12:05.950 "dma_device_id": "system", 00:12:05.950 "dma_device_type": 1 00:12:05.950 }, 00:12:05.950 { 00:12:05.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.950 "dma_device_type": 2 00:12:05.950 } 00:12:05.950 ], 00:12:05.950 "driver_specific": {} 00:12:05.950 } 00:12:05.950 ] 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.950 "name": "Existed_Raid", 00:12:05.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.950 "strip_size_kb": 64, 00:12:05.950 "state": "configuring", 00:12:05.950 "raid_level": "concat", 00:12:05.950 "superblock": false, 
00:12:05.950 "num_base_bdevs": 4, 00:12:05.950 "num_base_bdevs_discovered": 3, 00:12:05.950 "num_base_bdevs_operational": 4, 00:12:05.950 "base_bdevs_list": [ 00:12:05.950 { 00:12:05.950 "name": "BaseBdev1", 00:12:05.950 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:05.950 "is_configured": true, 00:12:05.950 "data_offset": 0, 00:12:05.950 "data_size": 65536 00:12:05.950 }, 00:12:05.950 { 00:12:05.950 "name": null, 00:12:05.950 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:05.950 "is_configured": false, 00:12:05.950 "data_offset": 0, 00:12:05.950 "data_size": 65536 00:12:05.950 }, 00:12:05.950 { 00:12:05.950 "name": "BaseBdev3", 00:12:05.950 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:05.950 "is_configured": true, 00:12:05.950 "data_offset": 0, 00:12:05.950 "data_size": 65536 00:12:05.950 }, 00:12:05.950 { 00:12:05.950 "name": "BaseBdev4", 00:12:05.950 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:05.950 "is_configured": true, 00:12:05.950 "data_offset": 0, 00:12:05.950 "data_size": 65536 00:12:05.950 } 00:12:05.950 ] 00:12:05.950 }' 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.950 14:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:06.517 14:29:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.517 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.518 [2024-11-20 14:29:07.500287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.518 14:29:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.518 "name": "Existed_Raid", 00:12:06.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.518 "strip_size_kb": 64, 00:12:06.518 "state": "configuring", 00:12:06.518 "raid_level": "concat", 00:12:06.518 "superblock": false, 00:12:06.518 "num_base_bdevs": 4, 00:12:06.518 "num_base_bdevs_discovered": 2, 00:12:06.518 "num_base_bdevs_operational": 4, 00:12:06.518 "base_bdevs_list": [ 00:12:06.518 { 00:12:06.518 "name": "BaseBdev1", 00:12:06.518 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:06.518 "is_configured": true, 00:12:06.518 "data_offset": 0, 00:12:06.518 "data_size": 65536 00:12:06.518 }, 00:12:06.518 { 00:12:06.518 "name": null, 00:12:06.518 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:06.518 "is_configured": false, 00:12:06.518 "data_offset": 0, 00:12:06.518 "data_size": 65536 00:12:06.518 }, 00:12:06.518 { 00:12:06.518 "name": null, 00:12:06.518 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:06.518 "is_configured": false, 00:12:06.518 "data_offset": 0, 00:12:06.518 "data_size": 65536 00:12:06.518 }, 00:12:06.518 { 00:12:06.518 "name": "BaseBdev4", 00:12:06.518 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:06.518 "is_configured": true, 00:12:06.518 "data_offset": 0, 00:12:06.518 "data_size": 65536 00:12:06.518 } 00:12:06.518 ] 00:12:06.518 }' 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.518 14:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.084 14:29:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.085 [2024-11-20 14:29:08.096412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.085 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.343 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.343 "name": "Existed_Raid", 00:12:07.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.343 "strip_size_kb": 64, 00:12:07.343 "state": "configuring", 00:12:07.343 "raid_level": "concat", 00:12:07.343 "superblock": false, 00:12:07.343 "num_base_bdevs": 4, 00:12:07.343 "num_base_bdevs_discovered": 3, 00:12:07.344 "num_base_bdevs_operational": 4, 00:12:07.344 "base_bdevs_list": [ 00:12:07.344 { 00:12:07.344 "name": "BaseBdev1", 00:12:07.344 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:07.344 "is_configured": true, 00:12:07.344 "data_offset": 0, 00:12:07.344 "data_size": 65536 00:12:07.344 }, 00:12:07.344 { 00:12:07.344 "name": null, 00:12:07.344 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:07.344 "is_configured": false, 00:12:07.344 "data_offset": 0, 00:12:07.344 "data_size": 65536 00:12:07.344 }, 00:12:07.344 { 00:12:07.344 "name": "BaseBdev3", 00:12:07.344 "uuid": 
"ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:07.344 "is_configured": true, 00:12:07.344 "data_offset": 0, 00:12:07.344 "data_size": 65536 00:12:07.344 }, 00:12:07.344 { 00:12:07.344 "name": "BaseBdev4", 00:12:07.344 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:07.344 "is_configured": true, 00:12:07.344 "data_offset": 0, 00:12:07.344 "data_size": 65536 00:12:07.344 } 00:12:07.344 ] 00:12:07.344 }' 00:12:07.344 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.344 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.602 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.602 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.603 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.603 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.861 [2024-11-20 14:29:08.700676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.861 "name": "Existed_Raid", 00:12:07.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.861 "strip_size_kb": 64, 00:12:07.861 "state": "configuring", 00:12:07.861 "raid_level": "concat", 00:12:07.861 "superblock": false, 00:12:07.861 "num_base_bdevs": 4, 00:12:07.861 
"num_base_bdevs_discovered": 2, 00:12:07.861 "num_base_bdevs_operational": 4, 00:12:07.861 "base_bdevs_list": [ 00:12:07.861 { 00:12:07.861 "name": null, 00:12:07.861 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:07.861 "is_configured": false, 00:12:07.861 "data_offset": 0, 00:12:07.861 "data_size": 65536 00:12:07.861 }, 00:12:07.861 { 00:12:07.861 "name": null, 00:12:07.861 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:07.861 "is_configured": false, 00:12:07.861 "data_offset": 0, 00:12:07.861 "data_size": 65536 00:12:07.861 }, 00:12:07.861 { 00:12:07.861 "name": "BaseBdev3", 00:12:07.861 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:07.861 "is_configured": true, 00:12:07.861 "data_offset": 0, 00:12:07.861 "data_size": 65536 00:12:07.861 }, 00:12:07.861 { 00:12:07.861 "name": "BaseBdev4", 00:12:07.861 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:07.861 "is_configured": true, 00:12:07.861 "data_offset": 0, 00:12:07.861 "data_size": 65536 00:12:07.861 } 00:12:07.861 ] 00:12:07.861 }' 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.861 14:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.427 [2024-11-20 14:29:09.322233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.427 "name": "Existed_Raid", 00:12:08.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.427 "strip_size_kb": 64, 00:12:08.427 "state": "configuring", 00:12:08.427 "raid_level": "concat", 00:12:08.427 "superblock": false, 00:12:08.427 "num_base_bdevs": 4, 00:12:08.427 "num_base_bdevs_discovered": 3, 00:12:08.427 "num_base_bdevs_operational": 4, 00:12:08.427 "base_bdevs_list": [ 00:12:08.427 { 00:12:08.427 "name": null, 00:12:08.427 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:08.427 "is_configured": false, 00:12:08.427 "data_offset": 0, 00:12:08.427 "data_size": 65536 00:12:08.427 }, 00:12:08.427 { 00:12:08.427 "name": "BaseBdev2", 00:12:08.427 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:08.427 "is_configured": true, 00:12:08.427 "data_offset": 0, 00:12:08.427 "data_size": 65536 00:12:08.427 }, 00:12:08.427 { 00:12:08.427 "name": "BaseBdev3", 00:12:08.427 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:08.427 "is_configured": true, 00:12:08.427 "data_offset": 0, 00:12:08.427 "data_size": 65536 00:12:08.427 }, 00:12:08.427 { 00:12:08.427 "name": "BaseBdev4", 00:12:08.427 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:08.427 "is_configured": true, 00:12:08.427 "data_offset": 0, 00:12:08.427 "data_size": 65536 00:12:08.427 } 00:12:08.427 ] 00:12:08.427 }' 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.427 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 89d632a6-18e8-4ccc-ad8b-0f74dff70f6a 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 [2024-11-20 14:29:09.966108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:08.995 [2024-11-20 14:29:09.966179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:08.995 [2024-11-20 14:29:09.966193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:08.995 [2024-11-20 14:29:09.966540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:08.995 [2024-11-20 14:29:09.966753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:08.995 [2024-11-20 14:29:09.966777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:08.995 [2024-11-20 14:29:09.967091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.995 NewBaseBdev 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.995 [ 00:12:08.995 { 00:12:08.995 "name": "NewBaseBdev", 00:12:08.995 "aliases": [ 00:12:08.995 "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a" 00:12:08.995 ], 00:12:08.995 "product_name": "Malloc disk", 00:12:08.995 "block_size": 512, 00:12:08.995 "num_blocks": 65536, 00:12:08.995 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:08.995 "assigned_rate_limits": { 00:12:08.995 "rw_ios_per_sec": 0, 00:12:08.995 "rw_mbytes_per_sec": 0, 00:12:08.995 "r_mbytes_per_sec": 0, 00:12:08.995 "w_mbytes_per_sec": 0 00:12:08.995 }, 00:12:08.995 "claimed": true, 00:12:08.995 "claim_type": "exclusive_write", 00:12:08.995 "zoned": false, 00:12:08.995 "supported_io_types": { 00:12:08.995 "read": true, 00:12:08.995 "write": true, 00:12:08.995 "unmap": true, 00:12:08.995 "flush": true, 00:12:08.995 "reset": true, 00:12:08.995 "nvme_admin": false, 00:12:08.995 "nvme_io": false, 00:12:08.995 "nvme_io_md": false, 00:12:08.995 "write_zeroes": true, 00:12:08.995 "zcopy": true, 00:12:08.995 "get_zone_info": false, 00:12:08.995 "zone_management": false, 00:12:08.995 "zone_append": false, 00:12:08.995 "compare": false, 00:12:08.995 "compare_and_write": false, 00:12:08.995 "abort": true, 00:12:08.995 "seek_hole": false, 00:12:08.995 "seek_data": false, 00:12:08.995 "copy": true, 00:12:08.995 "nvme_iov_md": false 00:12:08.995 }, 00:12:08.995 "memory_domains": [ 00:12:08.995 { 00:12:08.995 "dma_device_id": "system", 00:12:08.995 "dma_device_type": 1 00:12:08.995 }, 00:12:08.995 { 00:12:08.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.995 "dma_device_type": 2 00:12:08.995 } 00:12:08.995 ], 00:12:08.995 "driver_specific": {} 00:12:08.995 } 00:12:08.995 ] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.995 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.996 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.996 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.996 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.996 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.996 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.996 14:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.996 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.996 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.996 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.996 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.996 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.254 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.254 "name": "Existed_Raid", 00:12:09.254 "uuid": "911dc093-cfe2-4f9b-89c7-2a346fd1fa1b", 00:12:09.254 "strip_size_kb": 64, 00:12:09.254 "state": "online", 00:12:09.254 "raid_level": "concat", 00:12:09.254 "superblock": false, 00:12:09.254 
"num_base_bdevs": 4, 00:12:09.254 "num_base_bdevs_discovered": 4, 00:12:09.254 "num_base_bdevs_operational": 4, 00:12:09.254 "base_bdevs_list": [ 00:12:09.254 { 00:12:09.254 "name": "NewBaseBdev", 00:12:09.254 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:09.254 "is_configured": true, 00:12:09.254 "data_offset": 0, 00:12:09.254 "data_size": 65536 00:12:09.254 }, 00:12:09.254 { 00:12:09.254 "name": "BaseBdev2", 00:12:09.254 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:09.254 "is_configured": true, 00:12:09.254 "data_offset": 0, 00:12:09.254 "data_size": 65536 00:12:09.254 }, 00:12:09.254 { 00:12:09.254 "name": "BaseBdev3", 00:12:09.254 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:09.254 "is_configured": true, 00:12:09.254 "data_offset": 0, 00:12:09.254 "data_size": 65536 00:12:09.254 }, 00:12:09.254 { 00:12:09.254 "name": "BaseBdev4", 00:12:09.254 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:09.254 "is_configured": true, 00:12:09.254 "data_offset": 0, 00:12:09.254 "data_size": 65536 00:12:09.254 } 00:12:09.254 ] 00:12:09.254 }' 00:12:09.254 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.254 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.512 14:29:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.512 [2024-11-20 14:29:10.502784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.512 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.512 "name": "Existed_Raid", 00:12:09.512 "aliases": [ 00:12:09.512 "911dc093-cfe2-4f9b-89c7-2a346fd1fa1b" 00:12:09.512 ], 00:12:09.512 "product_name": "Raid Volume", 00:12:09.512 "block_size": 512, 00:12:09.512 "num_blocks": 262144, 00:12:09.512 "uuid": "911dc093-cfe2-4f9b-89c7-2a346fd1fa1b", 00:12:09.512 "assigned_rate_limits": { 00:12:09.512 "rw_ios_per_sec": 0, 00:12:09.512 "rw_mbytes_per_sec": 0, 00:12:09.512 "r_mbytes_per_sec": 0, 00:12:09.512 "w_mbytes_per_sec": 0 00:12:09.512 }, 00:12:09.512 "claimed": false, 00:12:09.512 "zoned": false, 00:12:09.512 "supported_io_types": { 00:12:09.512 "read": true, 00:12:09.512 "write": true, 00:12:09.512 "unmap": true, 00:12:09.512 "flush": true, 00:12:09.512 "reset": true, 00:12:09.512 "nvme_admin": false, 00:12:09.512 "nvme_io": false, 00:12:09.512 "nvme_io_md": false, 00:12:09.512 "write_zeroes": true, 00:12:09.512 "zcopy": false, 00:12:09.512 "get_zone_info": false, 00:12:09.512 "zone_management": false, 00:12:09.512 "zone_append": false, 00:12:09.512 "compare": false, 00:12:09.512 "compare_and_write": false, 00:12:09.512 "abort": false, 00:12:09.512 "seek_hole": false, 00:12:09.512 "seek_data": false, 00:12:09.512 "copy": false, 00:12:09.512 "nvme_iov_md": false 00:12:09.512 }, 
00:12:09.512 "memory_domains": [ 00:12:09.512 { 00:12:09.512 "dma_device_id": "system", 00:12:09.512 "dma_device_type": 1 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.512 "dma_device_type": 2 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "dma_device_id": "system", 00:12:09.512 "dma_device_type": 1 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.512 "dma_device_type": 2 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "dma_device_id": "system", 00:12:09.512 "dma_device_type": 1 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.512 "dma_device_type": 2 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "dma_device_id": "system", 00:12:09.512 "dma_device_type": 1 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.512 "dma_device_type": 2 00:12:09.512 } 00:12:09.512 ], 00:12:09.512 "driver_specific": { 00:12:09.512 "raid": { 00:12:09.512 "uuid": "911dc093-cfe2-4f9b-89c7-2a346fd1fa1b", 00:12:09.512 "strip_size_kb": 64, 00:12:09.512 "state": "online", 00:12:09.512 "raid_level": "concat", 00:12:09.513 "superblock": false, 00:12:09.513 "num_base_bdevs": 4, 00:12:09.513 "num_base_bdevs_discovered": 4, 00:12:09.513 "num_base_bdevs_operational": 4, 00:12:09.513 "base_bdevs_list": [ 00:12:09.513 { 00:12:09.513 "name": "NewBaseBdev", 00:12:09.513 "uuid": "89d632a6-18e8-4ccc-ad8b-0f74dff70f6a", 00:12:09.513 "is_configured": true, 00:12:09.513 "data_offset": 0, 00:12:09.513 "data_size": 65536 00:12:09.513 }, 00:12:09.513 { 00:12:09.513 "name": "BaseBdev2", 00:12:09.513 "uuid": "87ced052-0e2f-42a5-a5e4-3a488dd32dc9", 00:12:09.513 "is_configured": true, 00:12:09.513 "data_offset": 0, 00:12:09.513 "data_size": 65536 00:12:09.513 }, 00:12:09.513 { 00:12:09.513 "name": "BaseBdev3", 00:12:09.513 "uuid": "ba815df3-d2a9-4a9b-99f2-bb32741fb58c", 00:12:09.513 "is_configured": true, 00:12:09.513 "data_offset": 0, 
00:12:09.513 "data_size": 65536 00:12:09.513 }, 00:12:09.513 { 00:12:09.513 "name": "BaseBdev4", 00:12:09.513 "uuid": "0b4e292c-b174-46a6-8a92-7df3f73513e1", 00:12:09.513 "is_configured": true, 00:12:09.513 "data_offset": 0, 00:12:09.513 "data_size": 65536 00:12:09.513 } 00:12:09.513 ] 00:12:09.513 } 00:12:09.513 } 00:12:09.513 }' 00:12:09.513 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:09.771 BaseBdev2 00:12:09.771 BaseBdev3 00:12:09.771 BaseBdev4' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.771 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.031 [2024-11-20 14:29:10.866414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.031 [2024-11-20 14:29:10.866580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.031 [2024-11-20 14:29:10.866733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.031 [2024-11-20 14:29:10.866839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.031 [2024-11-20 14:29:10.866858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71478 00:12:10.031 14:29:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71478 ']' 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71478 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71478 00:12:10.031 killing process with pid 71478 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71478' 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71478 00:12:10.031 [2024-11-20 14:29:10.903696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.031 14:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71478 00:12:10.290 [2024-11-20 14:29:11.264309] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:11.665 00:12:11.665 real 0m12.759s 00:12:11.665 user 0m21.024s 00:12:11.665 sys 0m1.898s 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.665 ************************************ 00:12:11.665 END TEST raid_state_function_test 00:12:11.665 ************************************ 00:12:11.665 14:29:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:11.665 14:29:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.665 14:29:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.665 14:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.665 ************************************ 00:12:11.665 START TEST raid_state_function_test_sb 00:12:11.665 ************************************ 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72160 00:12:11.665 Process raid 
pid: 72160 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72160' 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72160 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72160 ']' 00:12:11.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.665 14:29:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.665 [2024-11-20 14:29:12.503798] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:12:11.665 [2024-11-20 14:29:12.504226] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.665 [2024-11-20 14:29:12.686814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.954 [2024-11-20 14:29:12.848405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.213 [2024-11-20 14:29:13.076548] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.213 [2024-11-20 14:29:13.076608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.479 [2024-11-20 14:29:13.507243] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.479 [2024-11-20 14:29:13.507319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.479 [2024-11-20 14:29:13.507339] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.479 [2024-11-20 14:29:13.507357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.479 [2024-11-20 14:29:13.507368] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:12.479 [2024-11-20 14:29:13.507384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.479 [2024-11-20 14:29:13.507394] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:12.479 [2024-11-20 14:29:13.507409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.479 14:29:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.479 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.737 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.737 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.737 "name": "Existed_Raid", 00:12:12.737 "uuid": "8eb7d4d0-322f-4609-8219-6af80a5d8c67", 00:12:12.737 "strip_size_kb": 64, 00:12:12.737 "state": "configuring", 00:12:12.737 "raid_level": "concat", 00:12:12.737 "superblock": true, 00:12:12.737 "num_base_bdevs": 4, 00:12:12.737 "num_base_bdevs_discovered": 0, 00:12:12.737 "num_base_bdevs_operational": 4, 00:12:12.737 "base_bdevs_list": [ 00:12:12.737 { 00:12:12.737 "name": "BaseBdev1", 00:12:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.737 "is_configured": false, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 0 00:12:12.737 }, 00:12:12.737 { 00:12:12.737 "name": "BaseBdev2", 00:12:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.737 "is_configured": false, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 0 00:12:12.737 }, 00:12:12.737 { 00:12:12.737 "name": "BaseBdev3", 00:12:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.737 "is_configured": false, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 0 00:12:12.737 }, 00:12:12.737 { 00:12:12.737 "name": "BaseBdev4", 00:12:12.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.737 "is_configured": false, 00:12:12.737 "data_offset": 0, 00:12:12.737 "data_size": 0 00:12:12.737 } 00:12:12.737 ] 00:12:12.737 }' 00:12:12.737 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.737 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.996 14:29:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.996 [2024-11-20 14:29:13.979303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.996 [2024-11-20 14:29:13.979357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.996 [2024-11-20 14:29:13.987292] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.996 [2024-11-20 14:29:13.987349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.996 [2024-11-20 14:29:13.987367] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.996 [2024-11-20 14:29:13.987385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.996 [2024-11-20 14:29:13.987395] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.996 [2024-11-20 14:29:13.987410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.996 [2024-11-20 14:29:13.987420] bdev.c:8482:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:12.996 [2024-11-20 14:29:13.987435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.996 14:29:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.996 [2024-11-20 14:29:14.033069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.996 BaseBdev1 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.996 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.255 [ 00:12:13.255 { 00:12:13.255 "name": "BaseBdev1", 00:12:13.255 "aliases": [ 00:12:13.255 "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae" 00:12:13.255 ], 00:12:13.255 "product_name": "Malloc disk", 00:12:13.255 "block_size": 512, 00:12:13.255 "num_blocks": 65536, 00:12:13.255 "uuid": "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae", 00:12:13.255 "assigned_rate_limits": { 00:12:13.255 "rw_ios_per_sec": 0, 00:12:13.255 "rw_mbytes_per_sec": 0, 00:12:13.255 "r_mbytes_per_sec": 0, 00:12:13.255 "w_mbytes_per_sec": 0 00:12:13.255 }, 00:12:13.255 "claimed": true, 00:12:13.255 "claim_type": "exclusive_write", 00:12:13.255 "zoned": false, 00:12:13.255 "supported_io_types": { 00:12:13.255 "read": true, 00:12:13.255 "write": true, 00:12:13.255 "unmap": true, 00:12:13.255 "flush": true, 00:12:13.255 "reset": true, 00:12:13.255 "nvme_admin": false, 00:12:13.255 "nvme_io": false, 00:12:13.255 "nvme_io_md": false, 00:12:13.255 "write_zeroes": true, 00:12:13.255 "zcopy": true, 00:12:13.255 "get_zone_info": false, 00:12:13.255 "zone_management": false, 00:12:13.255 "zone_append": false, 00:12:13.255 "compare": false, 00:12:13.255 "compare_and_write": false, 00:12:13.255 "abort": true, 00:12:13.255 "seek_hole": false, 00:12:13.255 "seek_data": false, 00:12:13.255 "copy": true, 00:12:13.255 "nvme_iov_md": false 00:12:13.255 }, 00:12:13.255 "memory_domains": [ 00:12:13.255 { 00:12:13.255 "dma_device_id": "system", 00:12:13.255 "dma_device_type": 1 00:12:13.255 }, 00:12:13.255 { 00:12:13.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.255 "dma_device_type": 2 00:12:13.255 } 
00:12:13.255 ], 00:12:13.255 "driver_specific": {} 00:12:13.255 } 00:12:13.255 ] 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.255 14:29:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.255 "name": "Existed_Raid", 00:12:13.255 "uuid": "365f263f-52a9-4a4a-a7f5-74a5a54e1fcb", 00:12:13.255 "strip_size_kb": 64, 00:12:13.255 "state": "configuring", 00:12:13.255 "raid_level": "concat", 00:12:13.255 "superblock": true, 00:12:13.255 "num_base_bdevs": 4, 00:12:13.255 "num_base_bdevs_discovered": 1, 00:12:13.255 "num_base_bdevs_operational": 4, 00:12:13.255 "base_bdevs_list": [ 00:12:13.255 { 00:12:13.255 "name": "BaseBdev1", 00:12:13.255 "uuid": "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae", 00:12:13.255 "is_configured": true, 00:12:13.255 "data_offset": 2048, 00:12:13.255 "data_size": 63488 00:12:13.255 }, 00:12:13.255 { 00:12:13.255 "name": "BaseBdev2", 00:12:13.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.255 "is_configured": false, 00:12:13.255 "data_offset": 0, 00:12:13.255 "data_size": 0 00:12:13.255 }, 00:12:13.255 { 00:12:13.255 "name": "BaseBdev3", 00:12:13.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.255 "is_configured": false, 00:12:13.255 "data_offset": 0, 00:12:13.255 "data_size": 0 00:12:13.255 }, 00:12:13.255 { 00:12:13.255 "name": "BaseBdev4", 00:12:13.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.255 "is_configured": false, 00:12:13.255 "data_offset": 0, 00:12:13.255 "data_size": 0 00:12:13.255 } 00:12:13.255 ] 00:12:13.255 }' 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.255 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.822 14:29:14 
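The xtrace above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and stores the result in `raid_bdev_info` for `verify_raid_bdev_state`. As a sketch only (sample data modeled on the JSON dumped in this log, trimmed to the checked fields — not live RPC output), the same selection and checks in Python:

```python
import json

# Sample bdev_raid_get_bdevs output modeled on the log above,
# trimmed to the fields verify_raid_bdev_state inspects.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "superblock": true,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# The test asserts the array is still assembling with the expected geometry.
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
```

The `next(...)` generator mirrors `jq`'s `select`: it returns the first array element whose `name` matches, which is all the test needs since raid bdev names are unique.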
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.822 [2024-11-20 14:29:14.609266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.822 [2024-11-20 14:29:14.609338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.822 [2024-11-20 14:29:14.617344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.822 [2024-11-20 14:29:14.619970] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.822 [2024-11-20 14:29:14.620029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.822 [2024-11-20 14:29:14.620047] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.822 [2024-11-20 14:29:14.620066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.822 [2024-11-20 14:29:14.620077] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:13.822 [2024-11-20 14:29:14.620092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.822 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:13.823 "name": "Existed_Raid", 00:12:13.823 "uuid": "fddaf08a-d1de-482a-8e78-6f99b17c3c3a", 00:12:13.823 "strip_size_kb": 64, 00:12:13.823 "state": "configuring", 00:12:13.823 "raid_level": "concat", 00:12:13.823 "superblock": true, 00:12:13.823 "num_base_bdevs": 4, 00:12:13.823 "num_base_bdevs_discovered": 1, 00:12:13.823 "num_base_bdevs_operational": 4, 00:12:13.823 "base_bdevs_list": [ 00:12:13.823 { 00:12:13.823 "name": "BaseBdev1", 00:12:13.823 "uuid": "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae", 00:12:13.823 "is_configured": true, 00:12:13.823 "data_offset": 2048, 00:12:13.823 "data_size": 63488 00:12:13.823 }, 00:12:13.823 { 00:12:13.823 "name": "BaseBdev2", 00:12:13.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.823 "is_configured": false, 00:12:13.823 "data_offset": 0, 00:12:13.823 "data_size": 0 00:12:13.823 }, 00:12:13.823 { 00:12:13.823 "name": "BaseBdev3", 00:12:13.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.823 "is_configured": false, 00:12:13.823 "data_offset": 0, 00:12:13.823 "data_size": 0 00:12:13.823 }, 00:12:13.823 { 00:12:13.823 "name": "BaseBdev4", 00:12:13.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.823 "is_configured": false, 00:12:13.823 "data_offset": 0, 00:12:13.823 "data_size": 0 00:12:13.823 } 00:12:13.823 ] 00:12:13.823 }' 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.823 14:29:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.081 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.081 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.081 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 [2024-11-20 14:29:15.172754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:14.340 BaseBdev2 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 [ 00:12:14.340 { 00:12:14.340 "name": "BaseBdev2", 00:12:14.340 "aliases": [ 00:12:14.340 "596b6fa8-4bcc-409c-bea4-8875be22659c" 00:12:14.340 ], 00:12:14.340 "product_name": "Malloc disk", 00:12:14.340 "block_size": 512, 00:12:14.340 "num_blocks": 65536, 00:12:14.340 "uuid": "596b6fa8-4bcc-409c-bea4-8875be22659c", 
00:12:14.340 "assigned_rate_limits": { 00:12:14.340 "rw_ios_per_sec": 0, 00:12:14.340 "rw_mbytes_per_sec": 0, 00:12:14.340 "r_mbytes_per_sec": 0, 00:12:14.340 "w_mbytes_per_sec": 0 00:12:14.340 }, 00:12:14.340 "claimed": true, 00:12:14.340 "claim_type": "exclusive_write", 00:12:14.340 "zoned": false, 00:12:14.340 "supported_io_types": { 00:12:14.340 "read": true, 00:12:14.340 "write": true, 00:12:14.340 "unmap": true, 00:12:14.340 "flush": true, 00:12:14.340 "reset": true, 00:12:14.340 "nvme_admin": false, 00:12:14.340 "nvme_io": false, 00:12:14.340 "nvme_io_md": false, 00:12:14.340 "write_zeroes": true, 00:12:14.340 "zcopy": true, 00:12:14.340 "get_zone_info": false, 00:12:14.340 "zone_management": false, 00:12:14.340 "zone_append": false, 00:12:14.340 "compare": false, 00:12:14.340 "compare_and_write": false, 00:12:14.340 "abort": true, 00:12:14.340 "seek_hole": false, 00:12:14.340 "seek_data": false, 00:12:14.340 "copy": true, 00:12:14.340 "nvme_iov_md": false 00:12:14.340 }, 00:12:14.340 "memory_domains": [ 00:12:14.340 { 00:12:14.340 "dma_device_id": "system", 00:12:14.340 "dma_device_type": 1 00:12:14.340 }, 00:12:14.340 { 00:12:14.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.340 "dma_device_type": 2 00:12:14.340 } 00:12:14.340 ], 00:12:14.340 "driver_specific": {} 00:12:14.340 } 00:12:14.340 ] 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.340 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.341 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.341 "name": "Existed_Raid", 00:12:14.341 "uuid": "fddaf08a-d1de-482a-8e78-6f99b17c3c3a", 00:12:14.341 "strip_size_kb": 64, 00:12:14.341 "state": "configuring", 00:12:14.341 "raid_level": "concat", 00:12:14.341 "superblock": true, 00:12:14.341 "num_base_bdevs": 4, 00:12:14.341 "num_base_bdevs_discovered": 2, 00:12:14.341 
"num_base_bdevs_operational": 4, 00:12:14.341 "base_bdevs_list": [ 00:12:14.341 { 00:12:14.341 "name": "BaseBdev1", 00:12:14.341 "uuid": "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae", 00:12:14.341 "is_configured": true, 00:12:14.341 "data_offset": 2048, 00:12:14.341 "data_size": 63488 00:12:14.341 }, 00:12:14.341 { 00:12:14.341 "name": "BaseBdev2", 00:12:14.341 "uuid": "596b6fa8-4bcc-409c-bea4-8875be22659c", 00:12:14.341 "is_configured": true, 00:12:14.341 "data_offset": 2048, 00:12:14.341 "data_size": 63488 00:12:14.341 }, 00:12:14.341 { 00:12:14.341 "name": "BaseBdev3", 00:12:14.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.341 "is_configured": false, 00:12:14.341 "data_offset": 0, 00:12:14.341 "data_size": 0 00:12:14.341 }, 00:12:14.341 { 00:12:14.341 "name": "BaseBdev4", 00:12:14.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.341 "is_configured": false, 00:12:14.341 "data_offset": 0, 00:12:14.341 "data_size": 0 00:12:14.341 } 00:12:14.341 ] 00:12:14.341 }' 00:12:14.341 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.341 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.908 [2024-11-20 14:29:15.782240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.908 BaseBdev3 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.908 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.909 [ 00:12:14.909 { 00:12:14.909 "name": "BaseBdev3", 00:12:14.909 "aliases": [ 00:12:14.909 "d099c8b4-7f4d-4817-859c-2a1de49db313" 00:12:14.909 ], 00:12:14.909 "product_name": "Malloc disk", 00:12:14.909 "block_size": 512, 00:12:14.909 "num_blocks": 65536, 00:12:14.909 "uuid": "d099c8b4-7f4d-4817-859c-2a1de49db313", 00:12:14.909 "assigned_rate_limits": { 00:12:14.909 "rw_ios_per_sec": 0, 00:12:14.909 "rw_mbytes_per_sec": 0, 00:12:14.909 "r_mbytes_per_sec": 0, 00:12:14.909 "w_mbytes_per_sec": 0 00:12:14.909 }, 00:12:14.909 "claimed": true, 00:12:14.909 "claim_type": "exclusive_write", 00:12:14.909 "zoned": false, 00:12:14.909 "supported_io_types": { 
00:12:14.909 "read": true, 00:12:14.909 "write": true, 00:12:14.909 "unmap": true, 00:12:14.909 "flush": true, 00:12:14.909 "reset": true, 00:12:14.909 "nvme_admin": false, 00:12:14.909 "nvme_io": false, 00:12:14.909 "nvme_io_md": false, 00:12:14.909 "write_zeroes": true, 00:12:14.909 "zcopy": true, 00:12:14.909 "get_zone_info": false, 00:12:14.909 "zone_management": false, 00:12:14.909 "zone_append": false, 00:12:14.909 "compare": false, 00:12:14.909 "compare_and_write": false, 00:12:14.909 "abort": true, 00:12:14.909 "seek_hole": false, 00:12:14.909 "seek_data": false, 00:12:14.909 "copy": true, 00:12:14.909 "nvme_iov_md": false 00:12:14.909 }, 00:12:14.909 "memory_domains": [ 00:12:14.909 { 00:12:14.909 "dma_device_id": "system", 00:12:14.909 "dma_device_type": 1 00:12:14.909 }, 00:12:14.909 { 00:12:14.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.909 "dma_device_type": 2 00:12:14.909 } 00:12:14.909 ], 00:12:14.909 "driver_specific": {} 00:12:14.909 } 00:12:14.909 ] 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.909 "name": "Existed_Raid", 00:12:14.909 "uuid": "fddaf08a-d1de-482a-8e78-6f99b17c3c3a", 00:12:14.909 "strip_size_kb": 64, 00:12:14.909 "state": "configuring", 00:12:14.909 "raid_level": "concat", 00:12:14.909 "superblock": true, 00:12:14.909 "num_base_bdevs": 4, 00:12:14.909 "num_base_bdevs_discovered": 3, 00:12:14.909 "num_base_bdevs_operational": 4, 00:12:14.909 "base_bdevs_list": [ 00:12:14.909 { 00:12:14.909 "name": "BaseBdev1", 00:12:14.909 "uuid": "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae", 00:12:14.909 "is_configured": true, 00:12:14.909 "data_offset": 2048, 00:12:14.909 "data_size": 63488 00:12:14.909 }, 00:12:14.909 { 00:12:14.909 "name": "BaseBdev2", 00:12:14.909 
"uuid": "596b6fa8-4bcc-409c-bea4-8875be22659c", 00:12:14.909 "is_configured": true, 00:12:14.909 "data_offset": 2048, 00:12:14.909 "data_size": 63488 00:12:14.909 }, 00:12:14.909 { 00:12:14.909 "name": "BaseBdev3", 00:12:14.909 "uuid": "d099c8b4-7f4d-4817-859c-2a1de49db313", 00:12:14.909 "is_configured": true, 00:12:14.909 "data_offset": 2048, 00:12:14.909 "data_size": 63488 00:12:14.909 }, 00:12:14.909 { 00:12:14.909 "name": "BaseBdev4", 00:12:14.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.909 "is_configured": false, 00:12:14.909 "data_offset": 0, 00:12:14.909 "data_size": 0 00:12:14.909 } 00:12:14.909 ] 00:12:14.909 }' 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.909 14:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.476 [2024-11-20 14:29:16.341126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.476 [2024-11-20 14:29:16.341478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:15.476 [2024-11-20 14:29:16.341499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:15.476 BaseBdev4 00:12:15.476 [2024-11-20 14:29:16.341894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:15.476 [2024-11-20 14:29:16.342110] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:15.476 [2024-11-20 14:29:16.342132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:15.476 [2024-11-20 14:29:16.342315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.476 [ 00:12:15.476 { 00:12:15.476 "name": "BaseBdev4", 00:12:15.476 "aliases": [ 00:12:15.476 "4aa1ce9b-12a9-4d63-8447-1afa037bc091" 00:12:15.476 ], 00:12:15.476 "product_name": "Malloc disk", 00:12:15.476 "block_size": 512, 00:12:15.476 
"num_blocks": 65536, 00:12:15.476 "uuid": "4aa1ce9b-12a9-4d63-8447-1afa037bc091", 00:12:15.476 "assigned_rate_limits": { 00:12:15.476 "rw_ios_per_sec": 0, 00:12:15.476 "rw_mbytes_per_sec": 0, 00:12:15.476 "r_mbytes_per_sec": 0, 00:12:15.476 "w_mbytes_per_sec": 0 00:12:15.476 }, 00:12:15.476 "claimed": true, 00:12:15.476 "claim_type": "exclusive_write", 00:12:15.476 "zoned": false, 00:12:15.476 "supported_io_types": { 00:12:15.476 "read": true, 00:12:15.476 "write": true, 00:12:15.476 "unmap": true, 00:12:15.476 "flush": true, 00:12:15.476 "reset": true, 00:12:15.476 "nvme_admin": false, 00:12:15.476 "nvme_io": false, 00:12:15.476 "nvme_io_md": false, 00:12:15.476 "write_zeroes": true, 00:12:15.476 "zcopy": true, 00:12:15.476 "get_zone_info": false, 00:12:15.476 "zone_management": false, 00:12:15.476 "zone_append": false, 00:12:15.476 "compare": false, 00:12:15.476 "compare_and_write": false, 00:12:15.476 "abort": true, 00:12:15.476 "seek_hole": false, 00:12:15.476 "seek_data": false, 00:12:15.476 "copy": true, 00:12:15.476 "nvme_iov_md": false 00:12:15.476 }, 00:12:15.476 "memory_domains": [ 00:12:15.476 { 00:12:15.476 "dma_device_id": "system", 00:12:15.476 "dma_device_type": 1 00:12:15.476 }, 00:12:15.476 { 00:12:15.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.476 "dma_device_type": 2 00:12:15.476 } 00:12:15.476 ], 00:12:15.476 "driver_specific": {} 00:12:15.476 } 00:12:15.476 ] 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.476 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.477 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.477 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.477 "name": "Existed_Raid", 00:12:15.477 "uuid": "fddaf08a-d1de-482a-8e78-6f99b17c3c3a", 00:12:15.477 "strip_size_kb": 64, 00:12:15.477 "state": "online", 00:12:15.477 "raid_level": "concat", 00:12:15.477 "superblock": true, 00:12:15.477 "num_base_bdevs": 4, 
00:12:15.477 "num_base_bdevs_discovered": 4, 00:12:15.477 "num_base_bdevs_operational": 4, 00:12:15.477 "base_bdevs_list": [ 00:12:15.477 { 00:12:15.477 "name": "BaseBdev1", 00:12:15.477 "uuid": "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae", 00:12:15.477 "is_configured": true, 00:12:15.477 "data_offset": 2048, 00:12:15.477 "data_size": 63488 00:12:15.477 }, 00:12:15.477 { 00:12:15.477 "name": "BaseBdev2", 00:12:15.477 "uuid": "596b6fa8-4bcc-409c-bea4-8875be22659c", 00:12:15.477 "is_configured": true, 00:12:15.477 "data_offset": 2048, 00:12:15.477 "data_size": 63488 00:12:15.477 }, 00:12:15.477 { 00:12:15.477 "name": "BaseBdev3", 00:12:15.477 "uuid": "d099c8b4-7f4d-4817-859c-2a1de49db313", 00:12:15.477 "is_configured": true, 00:12:15.477 "data_offset": 2048, 00:12:15.477 "data_size": 63488 00:12:15.477 }, 00:12:15.477 { 00:12:15.477 "name": "BaseBdev4", 00:12:15.477 "uuid": "4aa1ce9b-12a9-4d63-8447-1afa037bc091", 00:12:15.477 "is_configured": true, 00:12:15.477 "data_offset": 2048, 00:12:15.477 "data_size": 63488 00:12:15.477 } 00:12:15.477 ] 00:12:15.477 }' 00:12:15.477 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.477 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.043 
14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.043 [2024-11-20 14:29:16.897790] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.043 "name": "Existed_Raid", 00:12:16.043 "aliases": [ 00:12:16.043 "fddaf08a-d1de-482a-8e78-6f99b17c3c3a" 00:12:16.043 ], 00:12:16.043 "product_name": "Raid Volume", 00:12:16.043 "block_size": 512, 00:12:16.043 "num_blocks": 253952, 00:12:16.043 "uuid": "fddaf08a-d1de-482a-8e78-6f99b17c3c3a", 00:12:16.043 "assigned_rate_limits": { 00:12:16.043 "rw_ios_per_sec": 0, 00:12:16.043 "rw_mbytes_per_sec": 0, 00:12:16.043 "r_mbytes_per_sec": 0, 00:12:16.043 "w_mbytes_per_sec": 0 00:12:16.043 }, 00:12:16.043 "claimed": false, 00:12:16.043 "zoned": false, 00:12:16.043 "supported_io_types": { 00:12:16.043 "read": true, 00:12:16.043 "write": true, 00:12:16.043 "unmap": true, 00:12:16.043 "flush": true, 00:12:16.043 "reset": true, 00:12:16.043 "nvme_admin": false, 00:12:16.043 "nvme_io": false, 00:12:16.043 "nvme_io_md": false, 00:12:16.043 "write_zeroes": true, 00:12:16.043 "zcopy": false, 00:12:16.043 "get_zone_info": false, 00:12:16.043 "zone_management": false, 00:12:16.043 "zone_append": false, 00:12:16.043 "compare": false, 00:12:16.043 "compare_and_write": false, 00:12:16.043 "abort": false, 00:12:16.043 "seek_hole": false, 00:12:16.043 "seek_data": false, 00:12:16.043 "copy": false, 00:12:16.043 
"nvme_iov_md": false 00:12:16.043 }, 00:12:16.043 "memory_domains": [ 00:12:16.043 { 00:12:16.043 "dma_device_id": "system", 00:12:16.043 "dma_device_type": 1 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.043 "dma_device_type": 2 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "dma_device_id": "system", 00:12:16.043 "dma_device_type": 1 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.043 "dma_device_type": 2 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "dma_device_id": "system", 00:12:16.043 "dma_device_type": 1 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.043 "dma_device_type": 2 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "dma_device_id": "system", 00:12:16.043 "dma_device_type": 1 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.043 "dma_device_type": 2 00:12:16.043 } 00:12:16.043 ], 00:12:16.043 "driver_specific": { 00:12:16.043 "raid": { 00:12:16.043 "uuid": "fddaf08a-d1de-482a-8e78-6f99b17c3c3a", 00:12:16.043 "strip_size_kb": 64, 00:12:16.043 "state": "online", 00:12:16.043 "raid_level": "concat", 00:12:16.043 "superblock": true, 00:12:16.043 "num_base_bdevs": 4, 00:12:16.043 "num_base_bdevs_discovered": 4, 00:12:16.043 "num_base_bdevs_operational": 4, 00:12:16.043 "base_bdevs_list": [ 00:12:16.043 { 00:12:16.043 "name": "BaseBdev1", 00:12:16.043 "uuid": "b3d2c2fb-b6f5-4c02-96da-1a872ba086ae", 00:12:16.043 "is_configured": true, 00:12:16.043 "data_offset": 2048, 00:12:16.043 "data_size": 63488 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "name": "BaseBdev2", 00:12:16.043 "uuid": "596b6fa8-4bcc-409c-bea4-8875be22659c", 00:12:16.043 "is_configured": true, 00:12:16.043 "data_offset": 2048, 00:12:16.043 "data_size": 63488 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "name": "BaseBdev3", 00:12:16.043 "uuid": "d099c8b4-7f4d-4817-859c-2a1de49db313", 00:12:16.043 "is_configured": true, 
00:12:16.043 "data_offset": 2048, 00:12:16.043 "data_size": 63488 00:12:16.043 }, 00:12:16.043 { 00:12:16.043 "name": "BaseBdev4", 00:12:16.043 "uuid": "4aa1ce9b-12a9-4d63-8447-1afa037bc091", 00:12:16.043 "is_configured": true, 00:12:16.043 "data_offset": 2048, 00:12:16.043 "data_size": 63488 00:12:16.043 } 00:12:16.043 ] 00:12:16.043 } 00:12:16.043 } 00:12:16.043 }' 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:16.043 BaseBdev2 00:12:16.043 BaseBdev3 00:12:16.043 BaseBdev4' 00:12:16.043 14:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.043 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.043 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.043 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:16.043 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.043 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.043 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.043 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.384 14:29:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.384 [2024-11-20 14:29:17.269503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.384 [2024-11-20 14:29:17.269548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.384 [2024-11-20 14:29:17.269640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.384 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.385 "name": "Existed_Raid", 00:12:16.385 "uuid": "fddaf08a-d1de-482a-8e78-6f99b17c3c3a", 00:12:16.385 "strip_size_kb": 64, 00:12:16.385 "state": "offline", 00:12:16.385 "raid_level": "concat", 00:12:16.385 "superblock": true, 00:12:16.385 "num_base_bdevs": 4, 00:12:16.385 "num_base_bdevs_discovered": 3, 00:12:16.385 "num_base_bdevs_operational": 3, 00:12:16.385 "base_bdevs_list": [ 00:12:16.385 { 00:12:16.385 "name": null, 00:12:16.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.385 "is_configured": false, 00:12:16.385 "data_offset": 0, 00:12:16.385 "data_size": 63488 00:12:16.385 }, 00:12:16.385 { 00:12:16.385 "name": "BaseBdev2", 00:12:16.385 "uuid": "596b6fa8-4bcc-409c-bea4-8875be22659c", 00:12:16.385 "is_configured": true, 00:12:16.385 "data_offset": 2048, 00:12:16.385 "data_size": 63488 00:12:16.385 }, 00:12:16.385 { 00:12:16.385 "name": "BaseBdev3", 00:12:16.385 "uuid": "d099c8b4-7f4d-4817-859c-2a1de49db313", 00:12:16.385 "is_configured": true, 00:12:16.385 "data_offset": 2048, 00:12:16.385 "data_size": 63488 00:12:16.385 }, 00:12:16.385 { 00:12:16.385 "name": "BaseBdev4", 00:12:16.385 "uuid": "4aa1ce9b-12a9-4d63-8447-1afa037bc091", 00:12:16.385 "is_configured": true, 00:12:16.385 "data_offset": 2048, 00:12:16.385 "data_size": 63488 00:12:16.385 } 00:12:16.385 ] 00:12:16.385 }' 00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.385 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.953 
14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.953 14:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.953 [2024-11-20 14:29:17.926997] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.212 [2024-11-20 14:29:18.068756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:17.212 14:29:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.212 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.212 [2024-11-20 14:29:18.218468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:17.212 [2024-11-20 14:29:18.218543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.471 BaseBdev2 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:17.471 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.472 [ 00:12:17.472 { 00:12:17.472 "name": "BaseBdev2", 00:12:17.472 "aliases": [ 00:12:17.472 
"1db3702f-ec1d-4e91-b2fa-91f6343ce8e9" 00:12:17.472 ], 00:12:17.472 "product_name": "Malloc disk", 00:12:17.472 "block_size": 512, 00:12:17.472 "num_blocks": 65536, 00:12:17.472 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:17.472 "assigned_rate_limits": { 00:12:17.472 "rw_ios_per_sec": 0, 00:12:17.472 "rw_mbytes_per_sec": 0, 00:12:17.472 "r_mbytes_per_sec": 0, 00:12:17.472 "w_mbytes_per_sec": 0 00:12:17.472 }, 00:12:17.472 "claimed": false, 00:12:17.472 "zoned": false, 00:12:17.472 "supported_io_types": { 00:12:17.472 "read": true, 00:12:17.472 "write": true, 00:12:17.472 "unmap": true, 00:12:17.472 "flush": true, 00:12:17.472 "reset": true, 00:12:17.472 "nvme_admin": false, 00:12:17.472 "nvme_io": false, 00:12:17.472 "nvme_io_md": false, 00:12:17.472 "write_zeroes": true, 00:12:17.472 "zcopy": true, 00:12:17.472 "get_zone_info": false, 00:12:17.472 "zone_management": false, 00:12:17.472 "zone_append": false, 00:12:17.472 "compare": false, 00:12:17.472 "compare_and_write": false, 00:12:17.472 "abort": true, 00:12:17.472 "seek_hole": false, 00:12:17.472 "seek_data": false, 00:12:17.472 "copy": true, 00:12:17.472 "nvme_iov_md": false 00:12:17.472 }, 00:12:17.472 "memory_domains": [ 00:12:17.472 { 00:12:17.472 "dma_device_id": "system", 00:12:17.472 "dma_device_type": 1 00:12:17.472 }, 00:12:17.472 { 00:12:17.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.472 "dma_device_type": 2 00:12:17.472 } 00:12:17.472 ], 00:12:17.472 "driver_specific": {} 00:12:17.472 } 00:12:17.472 ] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.472 14:29:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.472 BaseBdev3 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.472 [ 00:12:17.472 { 
00:12:17.472 "name": "BaseBdev3", 00:12:17.472 "aliases": [ 00:12:17.472 "ec999172-8de4-4e47-8ced-a67dd30eb8f5" 00:12:17.472 ], 00:12:17.472 "product_name": "Malloc disk", 00:12:17.472 "block_size": 512, 00:12:17.472 "num_blocks": 65536, 00:12:17.472 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:17.472 "assigned_rate_limits": { 00:12:17.472 "rw_ios_per_sec": 0, 00:12:17.472 "rw_mbytes_per_sec": 0, 00:12:17.472 "r_mbytes_per_sec": 0, 00:12:17.472 "w_mbytes_per_sec": 0 00:12:17.472 }, 00:12:17.472 "claimed": false, 00:12:17.472 "zoned": false, 00:12:17.472 "supported_io_types": { 00:12:17.472 "read": true, 00:12:17.472 "write": true, 00:12:17.472 "unmap": true, 00:12:17.472 "flush": true, 00:12:17.472 "reset": true, 00:12:17.472 "nvme_admin": false, 00:12:17.472 "nvme_io": false, 00:12:17.472 "nvme_io_md": false, 00:12:17.472 "write_zeroes": true, 00:12:17.472 "zcopy": true, 00:12:17.472 "get_zone_info": false, 00:12:17.472 "zone_management": false, 00:12:17.472 "zone_append": false, 00:12:17.472 "compare": false, 00:12:17.472 "compare_and_write": false, 00:12:17.472 "abort": true, 00:12:17.472 "seek_hole": false, 00:12:17.472 "seek_data": false, 00:12:17.472 "copy": true, 00:12:17.472 "nvme_iov_md": false 00:12:17.472 }, 00:12:17.472 "memory_domains": [ 00:12:17.472 { 00:12:17.472 "dma_device_id": "system", 00:12:17.472 "dma_device_type": 1 00:12:17.472 }, 00:12:17.472 { 00:12:17.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.472 "dma_device_type": 2 00:12:17.472 } 00:12:17.472 ], 00:12:17.472 "driver_specific": {} 00:12:17.472 } 00:12:17.472 ] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.472 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.731 BaseBdev4 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:17.731 [ 00:12:17.731 { 00:12:17.731 "name": "BaseBdev4", 00:12:17.731 "aliases": [ 00:12:17.731 "0e5bc67a-ba55-49b6-8339-98d4be499579" 00:12:17.731 ], 00:12:17.731 "product_name": "Malloc disk", 00:12:17.731 "block_size": 512, 00:12:17.731 "num_blocks": 65536, 00:12:17.731 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:17.731 "assigned_rate_limits": { 00:12:17.731 "rw_ios_per_sec": 0, 00:12:17.731 "rw_mbytes_per_sec": 0, 00:12:17.731 "r_mbytes_per_sec": 0, 00:12:17.731 "w_mbytes_per_sec": 0 00:12:17.731 }, 00:12:17.731 "claimed": false, 00:12:17.731 "zoned": false, 00:12:17.731 "supported_io_types": { 00:12:17.731 "read": true, 00:12:17.731 "write": true, 00:12:17.731 "unmap": true, 00:12:17.731 "flush": true, 00:12:17.731 "reset": true, 00:12:17.731 "nvme_admin": false, 00:12:17.731 "nvme_io": false, 00:12:17.731 "nvme_io_md": false, 00:12:17.731 "write_zeroes": true, 00:12:17.731 "zcopy": true, 00:12:17.731 "get_zone_info": false, 00:12:17.731 "zone_management": false, 00:12:17.731 "zone_append": false, 00:12:17.731 "compare": false, 00:12:17.731 "compare_and_write": false, 00:12:17.731 "abort": true, 00:12:17.731 "seek_hole": false, 00:12:17.731 "seek_data": false, 00:12:17.731 "copy": true, 00:12:17.731 "nvme_iov_md": false 00:12:17.731 }, 00:12:17.731 "memory_domains": [ 00:12:17.731 { 00:12:17.731 "dma_device_id": "system", 00:12:17.731 "dma_device_type": 1 00:12:17.731 }, 00:12:17.731 { 00:12:17.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.731 "dma_device_type": 2 00:12:17.731 } 00:12:17.731 ], 00:12:17.731 "driver_specific": {} 00:12:17.731 } 00:12:17.731 ] 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:17.731 14:29:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.731 [2024-11-20 14:29:18.595179] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.731 [2024-11-20 14:29:18.595240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.731 [2024-11-20 14:29:18.595278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.731 [2024-11-20 14:29:18.597853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.731 [2024-11-20 14:29:18.597953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.731 "name": "Existed_Raid", 00:12:17.731 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:17.731 "strip_size_kb": 64, 00:12:17.731 "state": "configuring", 00:12:17.731 "raid_level": "concat", 00:12:17.731 "superblock": true, 00:12:17.731 "num_base_bdevs": 4, 00:12:17.731 "num_base_bdevs_discovered": 3, 00:12:17.731 "num_base_bdevs_operational": 4, 00:12:17.731 "base_bdevs_list": [ 00:12:17.731 { 00:12:17.731 "name": "BaseBdev1", 00:12:17.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.731 "is_configured": false, 00:12:17.731 "data_offset": 0, 00:12:17.731 "data_size": 0 00:12:17.731 }, 00:12:17.731 { 00:12:17.731 "name": "BaseBdev2", 00:12:17.731 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:17.731 "is_configured": true, 00:12:17.731 "data_offset": 2048, 00:12:17.731 "data_size": 63488 
00:12:17.731 }, 00:12:17.731 { 00:12:17.731 "name": "BaseBdev3", 00:12:17.731 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:17.731 "is_configured": true, 00:12:17.731 "data_offset": 2048, 00:12:17.731 "data_size": 63488 00:12:17.731 }, 00:12:17.731 { 00:12:17.731 "name": "BaseBdev4", 00:12:17.731 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:17.731 "is_configured": true, 00:12:17.731 "data_offset": 2048, 00:12:17.731 "data_size": 63488 00:12:17.731 } 00:12:17.731 ] 00:12:17.731 }' 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.731 14:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.298 [2024-11-20 14:29:19.111304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.298 "name": "Existed_Raid", 00:12:18.298 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:18.298 "strip_size_kb": 64, 00:12:18.298 "state": "configuring", 00:12:18.298 "raid_level": "concat", 00:12:18.298 "superblock": true, 00:12:18.298 "num_base_bdevs": 4, 00:12:18.298 "num_base_bdevs_discovered": 2, 00:12:18.298 "num_base_bdevs_operational": 4, 00:12:18.298 "base_bdevs_list": [ 00:12:18.298 { 00:12:18.298 "name": "BaseBdev1", 00:12:18.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.298 "is_configured": false, 00:12:18.298 "data_offset": 0, 00:12:18.298 "data_size": 0 00:12:18.298 }, 00:12:18.298 { 00:12:18.298 "name": null, 00:12:18.298 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:18.298 "is_configured": false, 00:12:18.298 "data_offset": 0, 00:12:18.298 "data_size": 63488 
00:12:18.298 }, 00:12:18.298 { 00:12:18.298 "name": "BaseBdev3", 00:12:18.298 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:18.298 "is_configured": true, 00:12:18.298 "data_offset": 2048, 00:12:18.298 "data_size": 63488 00:12:18.298 }, 00:12:18.298 { 00:12:18.298 "name": "BaseBdev4", 00:12:18.298 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:18.298 "is_configured": true, 00:12:18.298 "data_offset": 2048, 00:12:18.298 "data_size": 63488 00:12:18.298 } 00:12:18.298 ] 00:12:18.298 }' 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.298 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.865 [2024-11-20 14:29:19.730122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.865 BaseBdev1 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.865 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.866 [ 00:12:18.866 { 00:12:18.866 "name": "BaseBdev1", 00:12:18.866 "aliases": [ 00:12:18.866 "0599d668-2b17-469b-bb58-6b8809696a0f" 00:12:18.866 ], 00:12:18.866 "product_name": "Malloc disk", 00:12:18.866 "block_size": 512, 00:12:18.866 "num_blocks": 65536, 00:12:18.866 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:18.866 "assigned_rate_limits": { 00:12:18.866 "rw_ios_per_sec": 0, 00:12:18.866 "rw_mbytes_per_sec": 0, 
00:12:18.866 "r_mbytes_per_sec": 0, 00:12:18.866 "w_mbytes_per_sec": 0 00:12:18.866 }, 00:12:18.866 "claimed": true, 00:12:18.866 "claim_type": "exclusive_write", 00:12:18.866 "zoned": false, 00:12:18.866 "supported_io_types": { 00:12:18.866 "read": true, 00:12:18.866 "write": true, 00:12:18.866 "unmap": true, 00:12:18.866 "flush": true, 00:12:18.866 "reset": true, 00:12:18.866 "nvme_admin": false, 00:12:18.866 "nvme_io": false, 00:12:18.866 "nvme_io_md": false, 00:12:18.866 "write_zeroes": true, 00:12:18.866 "zcopy": true, 00:12:18.866 "get_zone_info": false, 00:12:18.866 "zone_management": false, 00:12:18.866 "zone_append": false, 00:12:18.866 "compare": false, 00:12:18.866 "compare_and_write": false, 00:12:18.866 "abort": true, 00:12:18.866 "seek_hole": false, 00:12:18.866 "seek_data": false, 00:12:18.866 "copy": true, 00:12:18.866 "nvme_iov_md": false 00:12:18.866 }, 00:12:18.866 "memory_domains": [ 00:12:18.866 { 00:12:18.866 "dma_device_id": "system", 00:12:18.866 "dma_device_type": 1 00:12:18.866 }, 00:12:18.866 { 00:12:18.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.866 "dma_device_type": 2 00:12:18.866 } 00:12:18.866 ], 00:12:18.866 "driver_specific": {} 00:12:18.866 } 00:12:18.866 ] 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.866 14:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.866 "name": "Existed_Raid", 00:12:18.866 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:18.866 "strip_size_kb": 64, 00:12:18.866 "state": "configuring", 00:12:18.866 "raid_level": "concat", 00:12:18.866 "superblock": true, 00:12:18.866 "num_base_bdevs": 4, 00:12:18.866 "num_base_bdevs_discovered": 3, 00:12:18.866 "num_base_bdevs_operational": 4, 00:12:18.866 "base_bdevs_list": [ 00:12:18.866 { 00:12:18.866 "name": "BaseBdev1", 00:12:18.866 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:18.866 "is_configured": true, 00:12:18.866 "data_offset": 2048, 00:12:18.866 "data_size": 63488 00:12:18.866 }, 00:12:18.866 { 
00:12:18.866 "name": null, 00:12:18.866 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:18.866 "is_configured": false, 00:12:18.866 "data_offset": 0, 00:12:18.866 "data_size": 63488 00:12:18.866 }, 00:12:18.866 { 00:12:18.866 "name": "BaseBdev3", 00:12:18.866 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:18.866 "is_configured": true, 00:12:18.866 "data_offset": 2048, 00:12:18.866 "data_size": 63488 00:12:18.866 }, 00:12:18.866 { 00:12:18.866 "name": "BaseBdev4", 00:12:18.866 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:18.866 "is_configured": true, 00:12:18.866 "data_offset": 2048, 00:12:18.866 "data_size": 63488 00:12:18.866 } 00:12:18.866 ] 00:12:18.866 }' 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.866 14:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.432 [2024-11-20 14:29:20.354364] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.432 14:29:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.432 "name": "Existed_Raid", 00:12:19.432 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:19.432 "strip_size_kb": 64, 00:12:19.432 "state": "configuring", 00:12:19.432 "raid_level": "concat", 00:12:19.432 "superblock": true, 00:12:19.432 "num_base_bdevs": 4, 00:12:19.432 "num_base_bdevs_discovered": 2, 00:12:19.432 "num_base_bdevs_operational": 4, 00:12:19.432 "base_bdevs_list": [ 00:12:19.432 { 00:12:19.432 "name": "BaseBdev1", 00:12:19.432 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:19.432 "is_configured": true, 00:12:19.432 "data_offset": 2048, 00:12:19.432 "data_size": 63488 00:12:19.432 }, 00:12:19.432 { 00:12:19.432 "name": null, 00:12:19.432 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:19.432 "is_configured": false, 00:12:19.432 "data_offset": 0, 00:12:19.432 "data_size": 63488 00:12:19.432 }, 00:12:19.432 { 00:12:19.432 "name": null, 00:12:19.432 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:19.432 "is_configured": false, 00:12:19.432 "data_offset": 0, 00:12:19.432 "data_size": 63488 00:12:19.432 }, 00:12:19.432 { 00:12:19.432 "name": "BaseBdev4", 00:12:19.432 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:19.432 "is_configured": true, 00:12:19.432 "data_offset": 2048, 00:12:19.432 "data_size": 63488 00:12:19.432 } 00:12:19.432 ] 00:12:19.432 }' 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.432 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.001 
14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.001 [2024-11-20 14:29:20.982501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.001 14:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.001 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.001 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.001 "name": "Existed_Raid", 00:12:20.001 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:20.001 "strip_size_kb": 64, 00:12:20.001 "state": "configuring", 00:12:20.001 "raid_level": "concat", 00:12:20.001 "superblock": true, 00:12:20.001 "num_base_bdevs": 4, 00:12:20.001 "num_base_bdevs_discovered": 3, 00:12:20.001 "num_base_bdevs_operational": 4, 00:12:20.001 "base_bdevs_list": [ 00:12:20.001 { 00:12:20.001 "name": "BaseBdev1", 00:12:20.001 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:20.001 "is_configured": true, 00:12:20.001 "data_offset": 2048, 00:12:20.001 "data_size": 63488 00:12:20.001 }, 00:12:20.001 { 00:12:20.001 "name": null, 00:12:20.001 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:20.001 "is_configured": false, 00:12:20.002 "data_offset": 0, 00:12:20.002 "data_size": 63488 00:12:20.002 }, 00:12:20.002 { 00:12:20.002 "name": "BaseBdev3", 00:12:20.002 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:20.002 "is_configured": true, 00:12:20.002 "data_offset": 2048, 00:12:20.002 "data_size": 63488 00:12:20.002 }, 00:12:20.002 { 00:12:20.002 "name": "BaseBdev4", 00:12:20.002 "uuid": 
"0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:20.002 "is_configured": true, 00:12:20.002 "data_offset": 2048, 00:12:20.002 "data_size": 63488 00:12:20.002 } 00:12:20.002 ] 00:12:20.002 }' 00:12:20.002 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.002 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.573 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.573 [2024-11-20 14:29:21.610816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.833 "name": "Existed_Raid", 00:12:20.833 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:20.833 "strip_size_kb": 64, 00:12:20.833 "state": "configuring", 00:12:20.833 "raid_level": "concat", 00:12:20.833 "superblock": true, 00:12:20.833 "num_base_bdevs": 4, 00:12:20.833 "num_base_bdevs_discovered": 2, 00:12:20.833 "num_base_bdevs_operational": 4, 00:12:20.833 "base_bdevs_list": [ 00:12:20.833 { 00:12:20.833 "name": null, 00:12:20.833 
"uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:20.833 "is_configured": false, 00:12:20.833 "data_offset": 0, 00:12:20.833 "data_size": 63488 00:12:20.833 }, 00:12:20.833 { 00:12:20.833 "name": null, 00:12:20.833 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:20.833 "is_configured": false, 00:12:20.833 "data_offset": 0, 00:12:20.833 "data_size": 63488 00:12:20.833 }, 00:12:20.833 { 00:12:20.833 "name": "BaseBdev3", 00:12:20.833 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:20.833 "is_configured": true, 00:12:20.833 "data_offset": 2048, 00:12:20.833 "data_size": 63488 00:12:20.833 }, 00:12:20.833 { 00:12:20.833 "name": "BaseBdev4", 00:12:20.833 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:20.833 "is_configured": true, 00:12:20.833 "data_offset": 2048, 00:12:20.833 "data_size": 63488 00:12:20.833 } 00:12:20.833 ] 00:12:20.833 }' 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.833 14:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.399 [2024-11-20 14:29:22.259668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.399 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.400 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.400 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.400 14:29:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.400 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.400 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.400 "name": "Existed_Raid", 00:12:21.400 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:21.400 "strip_size_kb": 64, 00:12:21.400 "state": "configuring", 00:12:21.400 "raid_level": "concat", 00:12:21.400 "superblock": true, 00:12:21.400 "num_base_bdevs": 4, 00:12:21.400 "num_base_bdevs_discovered": 3, 00:12:21.400 "num_base_bdevs_operational": 4, 00:12:21.400 "base_bdevs_list": [ 00:12:21.400 { 00:12:21.400 "name": null, 00:12:21.400 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:21.400 "is_configured": false, 00:12:21.400 "data_offset": 0, 00:12:21.400 "data_size": 63488 00:12:21.400 }, 00:12:21.400 { 00:12:21.400 "name": "BaseBdev2", 00:12:21.400 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:21.400 "is_configured": true, 00:12:21.400 "data_offset": 2048, 00:12:21.400 "data_size": 63488 00:12:21.400 }, 00:12:21.400 { 00:12:21.400 "name": "BaseBdev3", 00:12:21.400 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:21.400 "is_configured": true, 00:12:21.400 "data_offset": 2048, 00:12:21.400 "data_size": 63488 00:12:21.400 }, 00:12:21.400 { 00:12:21.400 "name": "BaseBdev4", 00:12:21.400 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:21.400 "is_configured": true, 00:12:21.400 "data_offset": 2048, 00:12:21.400 "data_size": 63488 00:12:21.400 } 00:12:21.400 ] 00:12:21.400 }' 00:12:21.400 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.400 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.968 14:29:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0599d668-2b17-469b-bb58-6b8809696a0f 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 [2024-11-20 14:29:22.891150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:21.968 [2024-11-20 14:29:22.891455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:21.968 [2024-11-20 14:29:22.891474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:21.968 [2024-11-20 14:29:22.891834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:21.968 [2024-11-20 14:29:22.892018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:21.968 [2024-11-20 14:29:22.892045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:21.968 [2024-11-20 14:29:22.892209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.968 NewBaseBdev 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.968 14:29:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 [ 00:12:21.968 { 00:12:21.968 "name": "NewBaseBdev", 00:12:21.968 "aliases": [ 00:12:21.968 "0599d668-2b17-469b-bb58-6b8809696a0f" 00:12:21.968 ], 00:12:21.968 "product_name": "Malloc disk", 00:12:21.968 "block_size": 512, 00:12:21.968 "num_blocks": 65536, 00:12:21.968 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:21.968 "assigned_rate_limits": { 00:12:21.968 "rw_ios_per_sec": 0, 00:12:21.968 "rw_mbytes_per_sec": 0, 00:12:21.968 "r_mbytes_per_sec": 0, 00:12:21.968 "w_mbytes_per_sec": 0 00:12:21.968 }, 00:12:21.968 "claimed": true, 00:12:21.968 "claim_type": "exclusive_write", 00:12:21.968 "zoned": false, 00:12:21.968 "supported_io_types": { 00:12:21.968 "read": true, 00:12:21.968 "write": true, 00:12:21.968 "unmap": true, 00:12:21.968 "flush": true, 00:12:21.968 "reset": true, 00:12:21.968 "nvme_admin": false, 00:12:21.968 "nvme_io": false, 00:12:21.968 "nvme_io_md": false, 00:12:21.968 "write_zeroes": true, 00:12:21.968 "zcopy": true, 00:12:21.968 "get_zone_info": false, 00:12:21.968 "zone_management": false, 00:12:21.968 "zone_append": false, 00:12:21.968 "compare": false, 00:12:21.968 "compare_and_write": false, 00:12:21.968 "abort": true, 00:12:21.968 "seek_hole": false, 00:12:21.968 "seek_data": false, 00:12:21.968 "copy": true, 00:12:21.968 "nvme_iov_md": false 00:12:21.968 }, 00:12:21.968 "memory_domains": [ 00:12:21.968 { 00:12:21.968 "dma_device_id": "system", 00:12:21.968 "dma_device_type": 1 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.968 "dma_device_type": 2 00:12:21.968 } 00:12:21.968 ], 00:12:21.968 "driver_specific": {} 00:12:21.968 } 00:12:21.968 ] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:21.968 14:29:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.968 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.968 "name": "Existed_Raid", 00:12:21.968 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:21.968 "strip_size_kb": 64, 00:12:21.968 
"state": "online", 00:12:21.968 "raid_level": "concat", 00:12:21.968 "superblock": true, 00:12:21.968 "num_base_bdevs": 4, 00:12:21.968 "num_base_bdevs_discovered": 4, 00:12:21.968 "num_base_bdevs_operational": 4, 00:12:21.968 "base_bdevs_list": [ 00:12:21.968 { 00:12:21.968 "name": "NewBaseBdev", 00:12:21.968 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:21.968 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": "BaseBdev2", 00:12:21.968 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:21.968 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": "BaseBdev3", 00:12:21.968 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:21.968 "is_configured": true, 00:12:21.968 "data_offset": 2048, 00:12:21.968 "data_size": 63488 00:12:21.968 }, 00:12:21.968 { 00:12:21.968 "name": "BaseBdev4", 00:12:21.969 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:21.969 "is_configured": true, 00:12:21.969 "data_offset": 2048, 00:12:21.969 "data_size": 63488 00:12:21.969 } 00:12:21.969 ] 00:12:21.969 }' 00:12:21.969 14:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.969 14:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.536 
14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.536 [2024-11-20 14:29:23.459832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.536 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.536 "name": "Existed_Raid", 00:12:22.536 "aliases": [ 00:12:22.536 "4cbb78ed-d424-4e00-893c-130c13d4502b" 00:12:22.536 ], 00:12:22.536 "product_name": "Raid Volume", 00:12:22.536 "block_size": 512, 00:12:22.536 "num_blocks": 253952, 00:12:22.536 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:22.536 "assigned_rate_limits": { 00:12:22.536 "rw_ios_per_sec": 0, 00:12:22.536 "rw_mbytes_per_sec": 0, 00:12:22.536 "r_mbytes_per_sec": 0, 00:12:22.536 "w_mbytes_per_sec": 0 00:12:22.536 }, 00:12:22.537 "claimed": false, 00:12:22.537 "zoned": false, 00:12:22.537 "supported_io_types": { 00:12:22.537 "read": true, 00:12:22.537 "write": true, 00:12:22.537 "unmap": true, 00:12:22.537 "flush": true, 00:12:22.537 "reset": true, 00:12:22.537 "nvme_admin": false, 00:12:22.537 "nvme_io": false, 00:12:22.537 "nvme_io_md": false, 00:12:22.537 "write_zeroes": true, 00:12:22.537 "zcopy": false, 00:12:22.537 "get_zone_info": false, 00:12:22.537 "zone_management": false, 00:12:22.537 "zone_append": false, 00:12:22.537 "compare": false, 00:12:22.537 "compare_and_write": false, 00:12:22.537 "abort": 
false, 00:12:22.537 "seek_hole": false, 00:12:22.537 "seek_data": false, 00:12:22.537 "copy": false, 00:12:22.537 "nvme_iov_md": false 00:12:22.537 }, 00:12:22.537 "memory_domains": [ 00:12:22.537 { 00:12:22.537 "dma_device_id": "system", 00:12:22.537 "dma_device_type": 1 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.537 "dma_device_type": 2 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "dma_device_id": "system", 00:12:22.537 "dma_device_type": 1 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.537 "dma_device_type": 2 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "dma_device_id": "system", 00:12:22.537 "dma_device_type": 1 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.537 "dma_device_type": 2 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "dma_device_id": "system", 00:12:22.537 "dma_device_type": 1 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.537 "dma_device_type": 2 00:12:22.537 } 00:12:22.537 ], 00:12:22.537 "driver_specific": { 00:12:22.537 "raid": { 00:12:22.537 "uuid": "4cbb78ed-d424-4e00-893c-130c13d4502b", 00:12:22.537 "strip_size_kb": 64, 00:12:22.537 "state": "online", 00:12:22.537 "raid_level": "concat", 00:12:22.537 "superblock": true, 00:12:22.537 "num_base_bdevs": 4, 00:12:22.537 "num_base_bdevs_discovered": 4, 00:12:22.537 "num_base_bdevs_operational": 4, 00:12:22.537 "base_bdevs_list": [ 00:12:22.537 { 00:12:22.537 "name": "NewBaseBdev", 00:12:22.537 "uuid": "0599d668-2b17-469b-bb58-6b8809696a0f", 00:12:22.537 "is_configured": true, 00:12:22.537 "data_offset": 2048, 00:12:22.537 "data_size": 63488 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "name": "BaseBdev2", 00:12:22.537 "uuid": "1db3702f-ec1d-4e91-b2fa-91f6343ce8e9", 00:12:22.537 "is_configured": true, 00:12:22.537 "data_offset": 2048, 00:12:22.537 "data_size": 63488 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 
"name": "BaseBdev3", 00:12:22.537 "uuid": "ec999172-8de4-4e47-8ced-a67dd30eb8f5", 00:12:22.537 "is_configured": true, 00:12:22.537 "data_offset": 2048, 00:12:22.537 "data_size": 63488 00:12:22.537 }, 00:12:22.537 { 00:12:22.537 "name": "BaseBdev4", 00:12:22.537 "uuid": "0e5bc67a-ba55-49b6-8339-98d4be499579", 00:12:22.537 "is_configured": true, 00:12:22.537 "data_offset": 2048, 00:12:22.537 "data_size": 63488 00:12:22.537 } 00:12:22.537 ] 00:12:22.537 } 00:12:22.537 } 00:12:22.537 }' 00:12:22.537 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.537 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:22.537 BaseBdev2 00:12:22.537 BaseBdev3 00:12:22.537 BaseBdev4' 00:12:22.537 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.797 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.797 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.797 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.797 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.798 14:29:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.798 [2024-11-20 14:29:23.835459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:22.798 [2024-11-20 14:29:23.835503] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.798 [2024-11-20 14:29:23.835639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.798 [2024-11-20 14:29:23.835743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.798 [2024-11-20 14:29:23.835761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72160 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72160 ']' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72160 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.798 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72160 00:12:23.057 killing process with pid 72160 00:12:23.057 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.057 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.057 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72160' 00:12:23.057 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72160 00:12:23.057 [2024-11-20 14:29:23.875009] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.057 14:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72160 00:12:23.316 [2024-11-20 14:29:24.235088] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.694 ************************************ 00:12:24.694 END TEST raid_state_function_test_sb 00:12:24.694 ************************************ 00:12:24.694 14:29:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:24.694 00:12:24.694 real 0m12.923s 00:12:24.694 user 0m21.336s 00:12:24.694 sys 
0m1.897s 00:12:24.694 14:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.694 14:29:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.694 14:29:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:24.694 14:29:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.694 14:29:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.694 14:29:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.694 ************************************ 00:12:24.694 START TEST raid_superblock_test 00:12:24.694 ************************************ 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72842 00:12:24.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72842 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72842 ']' 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.694 14:29:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.694 [2024-11-20 14:29:25.464137] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:12:24.694 [2024-11-20 14:29:25.464554] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72842 ] 00:12:24.694 [2024-11-20 14:29:25.646275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.953 [2024-11-20 14:29:25.810657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.213 [2024-11-20 14:29:26.023580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.213 [2024-11-20 14:29:26.023651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:25.786 
14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 malloc1 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 [2024-11-20 14:29:26.600702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.786 [2024-11-20 14:29:26.600920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.786 [2024-11-20 14:29:26.600968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:25.786 [2024-11-20 14:29:26.600987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.786 [2024-11-20 14:29:26.603939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.786 [2024-11-20 14:29:26.604109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.786 pt1 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 malloc2 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 [2024-11-20 14:29:26.658530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.786 [2024-11-20 14:29:26.658613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.786 [2024-11-20 14:29:26.658671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:25.786 [2024-11-20 14:29:26.658690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.786 [2024-11-20 14:29:26.661722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.786 [2024-11-20 14:29:26.661772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.786 
pt2 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 malloc3 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 [2024-11-20 14:29:26.729538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:25.786 [2024-11-20 14:29:26.729621] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.786 [2024-11-20 14:29:26.729683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:25.786 [2024-11-20 14:29:26.729703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.786 [2024-11-20 14:29:26.732758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.786 [2024-11-20 14:29:26.732806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:25.786 pt3 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 malloc4 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 [2024-11-20 14:29:26.787174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:25.786 [2024-11-20 14:29:26.787429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.786 [2024-11-20 14:29:26.787483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:25.786 [2024-11-20 14:29:26.787506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.786 [2024-11-20 14:29:26.790503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.786 [2024-11-20 14:29:26.790686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:25.786 pt4 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 [2024-11-20 14:29:26.795365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.786 [2024-11-20 
14:29:26.797928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.786 [2024-11-20 14:29:26.798194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.786 [2024-11-20 14:29:26.798280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:25.786 [2024-11-20 14:29:26.798551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:25.786 [2024-11-20 14:29:26.798571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:25.786 [2024-11-20 14:29:26.798941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:25.786 [2024-11-20 14:29:26.799165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:25.786 [2024-11-20 14:29:26.799188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:25.786 [2024-11-20 14:29:26.799443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.786 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.045 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.045 "name": "raid_bdev1", 00:12:26.045 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:26.045 "strip_size_kb": 64, 00:12:26.045 "state": "online", 00:12:26.045 "raid_level": "concat", 00:12:26.045 "superblock": true, 00:12:26.045 "num_base_bdevs": 4, 00:12:26.045 "num_base_bdevs_discovered": 4, 00:12:26.045 "num_base_bdevs_operational": 4, 00:12:26.045 "base_bdevs_list": [ 00:12:26.045 { 00:12:26.045 "name": "pt1", 00:12:26.045 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.045 "is_configured": true, 00:12:26.045 "data_offset": 2048, 00:12:26.045 "data_size": 63488 00:12:26.045 }, 00:12:26.045 { 00:12:26.045 "name": "pt2", 00:12:26.045 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.045 "is_configured": true, 00:12:26.045 "data_offset": 2048, 00:12:26.045 "data_size": 63488 00:12:26.045 }, 00:12:26.045 { 00:12:26.045 "name": "pt3", 00:12:26.045 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.045 "is_configured": true, 00:12:26.045 "data_offset": 2048, 00:12:26.045 
"data_size": 63488 00:12:26.045 }, 00:12:26.045 { 00:12:26.045 "name": "pt4", 00:12:26.045 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.045 "is_configured": true, 00:12:26.045 "data_offset": 2048, 00:12:26.045 "data_size": 63488 00:12:26.045 } 00:12:26.045 ] 00:12:26.045 }' 00:12:26.045 14:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.045 14:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.303 [2024-11-20 14:29:27.336003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.303 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.561 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.561 "name": "raid_bdev1", 00:12:26.561 "aliases": [ 00:12:26.561 "a8634aba-ccfb-4ffd-a05a-49a696ce7d70" 
00:12:26.561 ], 00:12:26.561 "product_name": "Raid Volume", 00:12:26.561 "block_size": 512, 00:12:26.561 "num_blocks": 253952, 00:12:26.561 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:26.562 "assigned_rate_limits": { 00:12:26.562 "rw_ios_per_sec": 0, 00:12:26.562 "rw_mbytes_per_sec": 0, 00:12:26.562 "r_mbytes_per_sec": 0, 00:12:26.562 "w_mbytes_per_sec": 0 00:12:26.562 }, 00:12:26.562 "claimed": false, 00:12:26.562 "zoned": false, 00:12:26.562 "supported_io_types": { 00:12:26.562 "read": true, 00:12:26.562 "write": true, 00:12:26.562 "unmap": true, 00:12:26.562 "flush": true, 00:12:26.562 "reset": true, 00:12:26.562 "nvme_admin": false, 00:12:26.562 "nvme_io": false, 00:12:26.562 "nvme_io_md": false, 00:12:26.562 "write_zeroes": true, 00:12:26.562 "zcopy": false, 00:12:26.562 "get_zone_info": false, 00:12:26.562 "zone_management": false, 00:12:26.562 "zone_append": false, 00:12:26.562 "compare": false, 00:12:26.562 "compare_and_write": false, 00:12:26.562 "abort": false, 00:12:26.562 "seek_hole": false, 00:12:26.562 "seek_data": false, 00:12:26.562 "copy": false, 00:12:26.562 "nvme_iov_md": false 00:12:26.562 }, 00:12:26.562 "memory_domains": [ 00:12:26.562 { 00:12:26.562 "dma_device_id": "system", 00:12:26.562 "dma_device_type": 1 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.562 "dma_device_type": 2 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": "system", 00:12:26.562 "dma_device_type": 1 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.562 "dma_device_type": 2 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": "system", 00:12:26.562 "dma_device_type": 1 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.562 "dma_device_type": 2 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": "system", 00:12:26.562 "dma_device_type": 1 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:26.562 "dma_device_type": 2 00:12:26.562 } 00:12:26.562 ], 00:12:26.562 "driver_specific": { 00:12:26.562 "raid": { 00:12:26.562 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:26.562 "strip_size_kb": 64, 00:12:26.562 "state": "online", 00:12:26.562 "raid_level": "concat", 00:12:26.562 "superblock": true, 00:12:26.562 "num_base_bdevs": 4, 00:12:26.562 "num_base_bdevs_discovered": 4, 00:12:26.562 "num_base_bdevs_operational": 4, 00:12:26.562 "base_bdevs_list": [ 00:12:26.562 { 00:12:26.562 "name": "pt1", 00:12:26.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.562 "is_configured": true, 00:12:26.562 "data_offset": 2048, 00:12:26.562 "data_size": 63488 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "name": "pt2", 00:12:26.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.562 "is_configured": true, 00:12:26.562 "data_offset": 2048, 00:12:26.562 "data_size": 63488 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "name": "pt3", 00:12:26.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.562 "is_configured": true, 00:12:26.562 "data_offset": 2048, 00:12:26.562 "data_size": 63488 00:12:26.562 }, 00:12:26.562 { 00:12:26.562 "name": "pt4", 00:12:26.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.562 "is_configured": true, 00:12:26.562 "data_offset": 2048, 00:12:26.562 "data_size": 63488 00:12:26.562 } 00:12:26.562 ] 00:12:26.562 } 00:12:26.562 } 00:12:26.562 }' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:26.562 pt2 00:12:26.562 pt3 00:12:26.562 pt4' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.562 14:29:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.562 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 [2024-11-20 14:29:27.708016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a8634aba-ccfb-4ffd-a05a-49a696ce7d70 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a8634aba-ccfb-4ffd-a05a-49a696ce7d70 ']' 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 [2024-11-20 14:29:27.755645] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.821 [2024-11-20 14:29:27.755800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:26.821 [2024-11-20 14:29:27.755935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.821 [2024-11-20 14:29:27.756035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.821 [2024-11-20 14:29:27.756061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:26.821 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:26.822 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.822 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.822 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.822 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:26.822 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:26.822 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.822 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 [2024-11-20 14:29:27.911737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:27.081 [2024-11-20 14:29:27.914361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:27.081 [2024-11-20 14:29:27.914433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:27.081 [2024-11-20 14:29:27.914492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:27.081 [2024-11-20 14:29:27.914574] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:27.081 [2024-11-20 14:29:27.914682] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:27.081 [2024-11-20 14:29:27.914722] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:27.081 [2024-11-20 14:29:27.914757] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:27.081 [2024-11-20 14:29:27.914781] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.081 [2024-11-20 14:29:27.914798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:27.081 request: 00:12:27.081 { 00:12:27.081 "name": "raid_bdev1", 00:12:27.081 "raid_level": "concat", 00:12:27.081 "base_bdevs": [ 00:12:27.081 "malloc1", 00:12:27.081 "malloc2", 00:12:27.081 "malloc3", 00:12:27.081 "malloc4" 00:12:27.081 ], 00:12:27.081 "strip_size_kb": 64, 00:12:27.081 "superblock": false, 00:12:27.081 "method": "bdev_raid_create", 00:12:27.081 "req_id": 1 00:12:27.081 } 00:12:27.081 Got JSON-RPC error response 00:12:27.081 response: 00:12:27.081 { 00:12:27.081 "code": -17, 00:12:27.081 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:27.081 } 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.081 [2024-11-20 14:29:27.983677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:27.081 [2024-11-20 14:29:27.983887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.081 [2024-11-20 14:29:27.983966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.081 [2024-11-20 14:29:27.984096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.081 [2024-11-20 14:29:27.987154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.081 [2024-11-20 14:29:27.987359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:27.081 [2024-11-20 14:29:27.987588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:27.081 [2024-11-20 14:29:27.987812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:27.081 pt1 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.081 14:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.082 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.082 14:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.082 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.082 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.082 "name": "raid_bdev1", 00:12:27.082 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:27.082 "strip_size_kb": 64, 00:12:27.082 "state": "configuring", 00:12:27.082 "raid_level": "concat", 00:12:27.082 "superblock": true, 00:12:27.082 "num_base_bdevs": 4, 00:12:27.082 "num_base_bdevs_discovered": 1, 00:12:27.082 "num_base_bdevs_operational": 4, 00:12:27.082 "base_bdevs_list": [ 00:12:27.082 { 00:12:27.082 "name": "pt1", 00:12:27.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.082 "is_configured": true, 00:12:27.082 "data_offset": 2048, 00:12:27.082 "data_size": 63488 00:12:27.082 }, 00:12:27.082 { 00:12:27.082 "name": null, 00:12:27.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.082 "is_configured": false, 00:12:27.082 "data_offset": 2048, 00:12:27.082 "data_size": 63488 00:12:27.082 }, 00:12:27.082 { 00:12:27.082 "name": null, 00:12:27.082 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.082 "is_configured": false, 00:12:27.082 "data_offset": 2048, 00:12:27.082 "data_size": 63488 00:12:27.082 }, 00:12:27.082 { 00:12:27.082 "name": null, 00:12:27.082 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.082 "is_configured": false, 00:12:27.082 "data_offset": 2048, 00:12:27.082 "data_size": 63488 00:12:27.082 } 00:12:27.082 ] 00:12:27.082 }' 00:12:27.082 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.082 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.649 [2024-11-20 14:29:28.511885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.649 [2024-11-20 14:29:28.511986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.649 [2024-11-20 14:29:28.512018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:27.649 [2024-11-20 14:29:28.512039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.649 [2024-11-20 14:29:28.512672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.649 [2024-11-20 14:29:28.512715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.649 [2024-11-20 14:29:28.512825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.649 [2024-11-20 14:29:28.512867] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.649 pt2 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.649 [2024-11-20 14:29:28.519854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.649 14:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.649 "name": "raid_bdev1", 00:12:27.649 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:27.649 "strip_size_kb": 64, 00:12:27.649 "state": "configuring", 00:12:27.649 "raid_level": "concat", 00:12:27.649 "superblock": true, 00:12:27.649 "num_base_bdevs": 4, 00:12:27.649 "num_base_bdevs_discovered": 1, 00:12:27.649 "num_base_bdevs_operational": 4, 00:12:27.649 "base_bdevs_list": [ 00:12:27.649 { 00:12:27.649 "name": "pt1", 00:12:27.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.649 "is_configured": true, 00:12:27.649 "data_offset": 2048, 00:12:27.649 "data_size": 63488 00:12:27.649 }, 00:12:27.649 { 00:12:27.649 "name": null, 00:12:27.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.649 "is_configured": false, 00:12:27.649 "data_offset": 0, 00:12:27.649 "data_size": 63488 00:12:27.649 }, 00:12:27.649 { 00:12:27.649 "name": null, 00:12:27.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.649 "is_configured": false, 00:12:27.649 "data_offset": 2048, 00:12:27.649 "data_size": 63488 00:12:27.649 }, 00:12:27.649 { 00:12:27.649 "name": null, 00:12:27.649 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:27.649 "is_configured": false, 00:12:27.649 "data_offset": 2048, 00:12:27.649 "data_size": 63488 00:12:27.649 } 00:12:27.649 ] 00:12:27.649 }' 00:12:27.649 14:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.650 14:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.217 [2024-11-20 14:29:29.056059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:28.217 [2024-11-20 14:29:29.056152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.217 [2024-11-20 14:29:29.056187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:28.217 [2024-11-20 14:29:29.056204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.217 [2024-11-20 14:29:29.056814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.217 [2024-11-20 14:29:29.056841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:28.217 [2024-11-20 14:29:29.056959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:28.217 [2024-11-20 14:29:29.056996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.217 pt2 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.217 [2024-11-20 14:29:29.063989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:28.217 [2024-11-20 14:29:29.064054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.217 [2024-11-20 14:29:29.064084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:28.217 [2024-11-20 14:29:29.064099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.217 [2024-11-20 14:29:29.064588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.217 [2024-11-20 14:29:29.064640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:28.217 [2024-11-20 14:29:29.064734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:28.217 [2024-11-20 14:29:29.064773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:28.217 pt3 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.217 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.217 [2024-11-20 14:29:29.071960] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:12:28.217 [2024-11-20 14:29:29.072015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.217 [2024-11-20 14:29:29.072047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:28.217 [2024-11-20 14:29:29.072063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.217 [2024-11-20 14:29:29.072533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.217 [2024-11-20 14:29:29.072569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:28.217 [2024-11-20 14:29:29.072679] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:28.217 [2024-11-20 14:29:29.072716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:28.217 [2024-11-20 14:29:29.072908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:28.217 [2024-11-20 14:29:29.072924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:28.217 [2024-11-20 14:29:29.073227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:28.217 [2024-11-20 14:29:29.073419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:28.218 [2024-11-20 14:29:29.073442] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:28.218 [2024-11-20 14:29:29.073605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.218 pt4 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:28.218 
14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.218 "name": "raid_bdev1", 00:12:28.218 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:28.218 "strip_size_kb": 64, 00:12:28.218 "state": "online", 00:12:28.218 "raid_level": "concat", 00:12:28.218 "superblock": true, 00:12:28.218 
"num_base_bdevs": 4, 00:12:28.218 "num_base_bdevs_discovered": 4, 00:12:28.218 "num_base_bdevs_operational": 4, 00:12:28.218 "base_bdevs_list": [ 00:12:28.218 { 00:12:28.218 "name": "pt1", 00:12:28.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.218 "is_configured": true, 00:12:28.218 "data_offset": 2048, 00:12:28.218 "data_size": 63488 00:12:28.218 }, 00:12:28.218 { 00:12:28.218 "name": "pt2", 00:12:28.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.218 "is_configured": true, 00:12:28.218 "data_offset": 2048, 00:12:28.218 "data_size": 63488 00:12:28.218 }, 00:12:28.218 { 00:12:28.218 "name": "pt3", 00:12:28.218 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.218 "is_configured": true, 00:12:28.218 "data_offset": 2048, 00:12:28.218 "data_size": 63488 00:12:28.218 }, 00:12:28.218 { 00:12:28.218 "name": "pt4", 00:12:28.218 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.218 "is_configured": true, 00:12:28.218 "data_offset": 2048, 00:12:28.218 "data_size": 63488 00:12:28.218 } 00:12:28.218 ] 00:12:28.218 }' 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.218 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.784 [2024-11-20 14:29:29.616648] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.784 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:28.784 "name": "raid_bdev1", 00:12:28.784 "aliases": [ 00:12:28.784 "a8634aba-ccfb-4ffd-a05a-49a696ce7d70" 00:12:28.784 ], 00:12:28.784 "product_name": "Raid Volume", 00:12:28.784 "block_size": 512, 00:12:28.784 "num_blocks": 253952, 00:12:28.784 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:28.784 "assigned_rate_limits": { 00:12:28.784 "rw_ios_per_sec": 0, 00:12:28.784 "rw_mbytes_per_sec": 0, 00:12:28.784 "r_mbytes_per_sec": 0, 00:12:28.784 "w_mbytes_per_sec": 0 00:12:28.784 }, 00:12:28.784 "claimed": false, 00:12:28.784 "zoned": false, 00:12:28.784 "supported_io_types": { 00:12:28.784 "read": true, 00:12:28.784 "write": true, 00:12:28.784 "unmap": true, 00:12:28.784 "flush": true, 00:12:28.784 "reset": true, 00:12:28.784 "nvme_admin": false, 00:12:28.784 "nvme_io": false, 00:12:28.784 "nvme_io_md": false, 00:12:28.784 "write_zeroes": true, 00:12:28.784 "zcopy": false, 00:12:28.784 "get_zone_info": false, 00:12:28.784 "zone_management": false, 00:12:28.784 "zone_append": false, 00:12:28.784 "compare": false, 00:12:28.784 "compare_and_write": false, 00:12:28.784 "abort": false, 00:12:28.784 "seek_hole": false, 00:12:28.784 "seek_data": false, 00:12:28.784 "copy": false, 00:12:28.784 "nvme_iov_md": false 00:12:28.784 }, 00:12:28.784 "memory_domains": [ 00:12:28.784 { 00:12:28.784 "dma_device_id": "system", 
00:12:28.784 "dma_device_type": 1 00:12:28.784 }, 00:12:28.784 { 00:12:28.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.784 "dma_device_type": 2 00:12:28.784 }, 00:12:28.784 { 00:12:28.784 "dma_device_id": "system", 00:12:28.784 "dma_device_type": 1 00:12:28.784 }, 00:12:28.784 { 00:12:28.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.784 "dma_device_type": 2 00:12:28.785 }, 00:12:28.785 { 00:12:28.785 "dma_device_id": "system", 00:12:28.785 "dma_device_type": 1 00:12:28.785 }, 00:12:28.785 { 00:12:28.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.785 "dma_device_type": 2 00:12:28.785 }, 00:12:28.785 { 00:12:28.785 "dma_device_id": "system", 00:12:28.785 "dma_device_type": 1 00:12:28.785 }, 00:12:28.785 { 00:12:28.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.785 "dma_device_type": 2 00:12:28.785 } 00:12:28.785 ], 00:12:28.785 "driver_specific": { 00:12:28.785 "raid": { 00:12:28.785 "uuid": "a8634aba-ccfb-4ffd-a05a-49a696ce7d70", 00:12:28.785 "strip_size_kb": 64, 00:12:28.785 "state": "online", 00:12:28.785 "raid_level": "concat", 00:12:28.785 "superblock": true, 00:12:28.785 "num_base_bdevs": 4, 00:12:28.785 "num_base_bdevs_discovered": 4, 00:12:28.785 "num_base_bdevs_operational": 4, 00:12:28.785 "base_bdevs_list": [ 00:12:28.785 { 00:12:28.785 "name": "pt1", 00:12:28.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.785 "is_configured": true, 00:12:28.785 "data_offset": 2048, 00:12:28.785 "data_size": 63488 00:12:28.785 }, 00:12:28.785 { 00:12:28.785 "name": "pt2", 00:12:28.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.785 "is_configured": true, 00:12:28.785 "data_offset": 2048, 00:12:28.785 "data_size": 63488 00:12:28.785 }, 00:12:28.785 { 00:12:28.785 "name": "pt3", 00:12:28.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.785 "is_configured": true, 00:12:28.785 "data_offset": 2048, 00:12:28.785 "data_size": 63488 00:12:28.785 }, 00:12:28.785 { 00:12:28.785 "name": "pt4", 00:12:28.785 
"uuid": "00000000-0000-0000-0000-000000000004", 00:12:28.785 "is_configured": true, 00:12:28.785 "data_offset": 2048, 00:12:28.785 "data_size": 63488 00:12:28.785 } 00:12:28.785 ] 00:12:28.785 } 00:12:28.785 } 00:12:28.785 }' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:28.785 pt2 00:12:28.785 pt3 00:12:28.785 pt4' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.785 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.044 14:29:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.044 14:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:29.044 [2024-11-20 14:29:30.004706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a8634aba-ccfb-4ffd-a05a-49a696ce7d70 '!=' a8634aba-ccfb-4ffd-a05a-49a696ce7d70 ']' 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72842 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72842 ']' 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72842 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:29.044 14:29:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72842 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.044 killing process with pid 72842 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72842' 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72842 00:12:29.044 [2024-11-20 14:29:30.083666] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.044 14:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72842 00:12:29.044 [2024-11-20 14:29:30.083827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.044 [2024-11-20 14:29:30.083972] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.044 [2024-11-20 14:29:30.083994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:29.610 [2024-11-20 14:29:30.452328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.632 14:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:30.632 00:12:30.632 real 0m6.149s 00:12:30.632 user 0m9.246s 00:12:30.632 sys 0m0.946s 00:12:30.632 14:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.632 ************************************ 00:12:30.632 END TEST raid_superblock_test 00:12:30.632 ************************************ 00:12:30.632 14:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.632 
14:29:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:30.632 14:29:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:30.632 14:29:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.632 14:29:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.632 ************************************ 00:12:30.632 START TEST raid_read_error_test 00:12:30.632 ************************************ 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gdOI4e169N 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73112 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73112 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73112 ']' 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.632 14:29:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.891 [2024-11-20 14:29:31.709780] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:12:30.891 [2024-11-20 14:29:31.709993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73112 ] 00:12:30.891 [2024-11-20 14:29:31.905222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.150 [2024-11-20 14:29:32.066465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.409 [2024-11-20 14:29:32.315243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.409 [2024-11-20 14:29:32.315336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.667 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.667 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:31.667 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.667 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:31.667 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.667 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 BaseBdev1_malloc 00:12:31.926 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.926 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:31.926 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.926 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.926 true 00:12:31.926 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:31.926 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 [2024-11-20 14:29:32.759909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:31.927 [2024-11-20 14:29:32.759985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.927 [2024-11-20 14:29:32.760015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:31.927 [2024-11-20 14:29:32.760035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.927 [2024-11-20 14:29:32.762870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.927 [2024-11-20 14:29:32.762923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:31.927 BaseBdev1 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 BaseBdev2_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 true 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 [2024-11-20 14:29:32.820941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:31.927 [2024-11-20 14:29:32.821017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.927 [2024-11-20 14:29:32.821044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:31.927 [2024-11-20 14:29:32.821061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.927 [2024-11-20 14:29:32.823975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.927 [2024-11-20 14:29:32.824027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:31.927 BaseBdev2 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 BaseBdev3_malloc 00:12:31.927 14:29:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 true 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 [2024-11-20 14:29:32.892800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:31.927 [2024-11-20 14:29:32.892874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.927 [2024-11-20 14:29:32.892905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:31.927 [2024-11-20 14:29:32.892928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.927 [2024-11-20 14:29:32.895888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.927 [2024-11-20 14:29:32.895942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:31.927 BaseBdev3 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 BaseBdev4_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 true 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 [2024-11-20 14:29:32.953608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:31.927 [2024-11-20 14:29:32.953695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.927 [2024-11-20 14:29:32.953725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:31.927 [2024-11-20 14:29:32.953742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.927 [2024-11-20 14:29:32.956617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.927 [2024-11-20 14:29:32.956690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:31.927 BaseBdev4 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.927 [2024-11-20 14:29:32.961712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.927 [2024-11-20 14:29:32.964213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.927 [2024-11-20 14:29:32.964328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.927 [2024-11-20 14:29:32.964431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:31.927 [2024-11-20 14:29:32.964793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:31.927 [2024-11-20 14:29:32.964830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:31.927 [2024-11-20 14:29:32.965162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:31.927 [2024-11-20 14:29:32.965393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:31.927 [2024-11-20 14:29:32.965432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:31.927 [2024-11-20 14:29:32.965711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:31.927 14:29:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.927 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.187 14:29:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.187 14:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.187 "name": "raid_bdev1", 00:12:32.187 "uuid": "2d3476f6-d178-40b2-bfff-76b0e23f6d7c", 00:12:32.187 "strip_size_kb": 64, 00:12:32.187 "state": "online", 00:12:32.187 "raid_level": "concat", 00:12:32.187 "superblock": true, 00:12:32.187 "num_base_bdevs": 4, 00:12:32.187 "num_base_bdevs_discovered": 4, 00:12:32.187 "num_base_bdevs_operational": 4, 00:12:32.187 "base_bdevs_list": [ 
00:12:32.187 { 00:12:32.187 "name": "BaseBdev1", 00:12:32.187 "uuid": "927acbc4-ae37-565b-8a40-af5fbb353586", 00:12:32.187 "is_configured": true, 00:12:32.187 "data_offset": 2048, 00:12:32.187 "data_size": 63488 00:12:32.187 }, 00:12:32.187 { 00:12:32.187 "name": "BaseBdev2", 00:12:32.187 "uuid": "89e91753-2efa-5f9b-b336-fe3bf3703468", 00:12:32.187 "is_configured": true, 00:12:32.187 "data_offset": 2048, 00:12:32.187 "data_size": 63488 00:12:32.187 }, 00:12:32.187 { 00:12:32.187 "name": "BaseBdev3", 00:12:32.187 "uuid": "d48f6eaf-2203-516d-b2a9-6b572bf40c8c", 00:12:32.187 "is_configured": true, 00:12:32.187 "data_offset": 2048, 00:12:32.187 "data_size": 63488 00:12:32.187 }, 00:12:32.187 { 00:12:32.187 "name": "BaseBdev4", 00:12:32.187 "uuid": "f6320028-7d54-50df-92a4-c5d272cf7ba5", 00:12:32.187 "is_configured": true, 00:12:32.187 "data_offset": 2048, 00:12:32.187 "data_size": 63488 00:12:32.187 } 00:12:32.187 ] 00:12:32.187 }' 00:12:32.187 14:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.187 14:29:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.754 14:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:32.754 14:29:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:32.754 [2024-11-20 14:29:33.639397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.690 14:29:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.690 14:29:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.690 "name": "raid_bdev1", 00:12:33.690 "uuid": "2d3476f6-d178-40b2-bfff-76b0e23f6d7c", 00:12:33.690 "strip_size_kb": 64, 00:12:33.690 "state": "online", 00:12:33.690 "raid_level": "concat", 00:12:33.690 "superblock": true, 00:12:33.690 "num_base_bdevs": 4, 00:12:33.690 "num_base_bdevs_discovered": 4, 00:12:33.690 "num_base_bdevs_operational": 4, 00:12:33.690 "base_bdevs_list": [ 00:12:33.690 { 00:12:33.690 "name": "BaseBdev1", 00:12:33.690 "uuid": "927acbc4-ae37-565b-8a40-af5fbb353586", 00:12:33.690 "is_configured": true, 00:12:33.690 "data_offset": 2048, 00:12:33.690 "data_size": 63488 00:12:33.690 }, 00:12:33.690 { 00:12:33.690 "name": "BaseBdev2", 00:12:33.690 "uuid": "89e91753-2efa-5f9b-b336-fe3bf3703468", 00:12:33.690 "is_configured": true, 00:12:33.690 "data_offset": 2048, 00:12:33.690 "data_size": 63488 00:12:33.690 }, 00:12:33.690 { 00:12:33.690 "name": "BaseBdev3", 00:12:33.690 "uuid": "d48f6eaf-2203-516d-b2a9-6b572bf40c8c", 00:12:33.690 "is_configured": true, 00:12:33.690 "data_offset": 2048, 00:12:33.690 "data_size": 63488 00:12:33.690 }, 00:12:33.690 { 00:12:33.690 "name": "BaseBdev4", 00:12:33.690 "uuid": "f6320028-7d54-50df-92a4-c5d272cf7ba5", 00:12:33.690 "is_configured": true, 00:12:33.690 "data_offset": 2048, 00:12:33.690 "data_size": 63488 00:12:33.690 } 00:12:33.690 ] 00:12:33.690 }' 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.690 14:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.257 [2024-11-20 14:29:35.034703] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.257 [2024-11-20 14:29:35.034748] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.257 [2024-11-20 14:29:35.038095] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.257 [2024-11-20 14:29:35.038181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.257 [2024-11-20 14:29:35.038246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.257 [2024-11-20 14:29:35.038269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:34.257 { 00:12:34.257 "results": [ 00:12:34.257 { 00:12:34.257 "job": "raid_bdev1", 00:12:34.257 "core_mask": "0x1", 00:12:34.257 "workload": "randrw", 00:12:34.257 "percentage": 50, 00:12:34.257 "status": "finished", 00:12:34.257 "queue_depth": 1, 00:12:34.257 "io_size": 131072, 00:12:34.257 "runtime": 1.392706, 00:12:34.257 "iops": 10335.275356033506, 00:12:34.257 "mibps": 1291.9094195041882, 00:12:34.257 "io_failed": 1, 00:12:34.257 "io_timeout": 0, 00:12:34.257 "avg_latency_us": 135.20922643594682, 00:12:34.257 "min_latency_us": 43.75272727272727, 00:12:34.257 "max_latency_us": 1846.9236363636364 00:12:34.257 } 00:12:34.257 ], 00:12:34.257 "core_count": 1 00:12:34.257 } 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73112 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73112 ']' 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73112 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73112 00:12:34.257 killing process with pid 73112 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.257 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73112' 00:12:34.258 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73112 00:12:34.258 14:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73112 00:12:34.258 [2024-11-20 14:29:35.077048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.516 [2024-11-20 14:29:35.374335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gdOI4e169N 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:35.896 ************************************ 00:12:35.896 END TEST raid_read_error_test 00:12:35.896 ************************************ 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:35.896 00:12:35.896 real 0m4.942s 
00:12:35.896 user 0m6.060s 00:12:35.896 sys 0m0.659s 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.896 14:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 14:29:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:35.896 14:29:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:35.896 14:29:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.896 14:29:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.896 ************************************ 00:12:35.896 START TEST raid_write_error_test 00:12:35.896 ************************************ 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.896 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p0xJUwRPBm 00:12:35.897 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73258 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73258 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73258 ']' 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.897 14:29:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.897 [2024-11-20 14:29:36.709717] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:12:35.897 [2024-11-20 14:29:36.710146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73258 ] 00:12:35.897 [2024-11-20 14:29:36.905756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.155 [2024-11-20 14:29:37.062497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.413 [2024-11-20 14:29:37.312242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.413 [2024-11-20 14:29:37.312584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 BaseBdev1_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 true 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 [2024-11-20 14:29:37.826752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:36.981 [2024-11-20 14:29:37.826824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.981 [2024-11-20 14:29:37.826855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:36.981 [2024-11-20 14:29:37.826873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.981 [2024-11-20 14:29:37.829713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.981 [2024-11-20 14:29:37.829914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.981 BaseBdev1 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 BaseBdev2_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:36.981 14:29:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 true 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 [2024-11-20 14:29:37.894977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:36.981 [2024-11-20 14:29:37.895053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.981 [2024-11-20 14:29:37.895082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:36.981 [2024-11-20 14:29:37.895100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.981 [2024-11-20 14:29:37.898012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.981 [2024-11-20 14:29:37.898068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.981 BaseBdev2 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:36.981 BaseBdev3_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 true 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.981 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.981 [2024-11-20 14:29:37.971662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:36.981 [2024-11-20 14:29:37.971733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.981 [2024-11-20 14:29:37.971762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:36.981 [2024-11-20 14:29:37.971779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.981 [2024-11-20 14:29:37.974673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.981 [2024-11-20 14:29:37.974721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:36.981 BaseBdev3 00:12:36.982 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.982 14:29:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:36.982 14:29:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:36.982 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.982 14:29:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.982 BaseBdev4_malloc 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.982 true 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.982 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.982 [2024-11-20 14:29:38.033710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:36.982 [2024-11-20 14:29:38.033787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.982 [2024-11-20 14:29:38.033818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:36.982 [2024-11-20 14:29:38.033847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.241 [2024-11-20 14:29:38.036721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.241 [2024-11-20 14:29:38.036894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:37.241 BaseBdev4 
00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.241 [2024-11-20 14:29:38.041898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.241 [2024-11-20 14:29:38.044363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.241 [2024-11-20 14:29:38.044593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.241 [2024-11-20 14:29:38.044740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:37.241 [2024-11-20 14:29:38.045044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:37.241 [2024-11-20 14:29:38.045066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:37.241 [2024-11-20 14:29:38.045395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:37.241 [2024-11-20 14:29:38.045610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:37.241 [2024-11-20 14:29:38.045653] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:37.241 [2024-11-20 14:29:38.045926] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.241 "name": "raid_bdev1", 00:12:37.241 "uuid": "d1b814a2-9d5c-4bee-a0a8-df9d51411970", 00:12:37.241 "strip_size_kb": 64, 00:12:37.241 "state": "online", 00:12:37.241 "raid_level": "concat", 00:12:37.241 "superblock": true, 00:12:37.241 "num_base_bdevs": 4, 00:12:37.241 "num_base_bdevs_discovered": 4, 00:12:37.241 
"num_base_bdevs_operational": 4, 00:12:37.241 "base_bdevs_list": [ 00:12:37.241 { 00:12:37.241 "name": "BaseBdev1", 00:12:37.241 "uuid": "906c1217-3537-5f88-94c5-45c2891a3479", 00:12:37.241 "is_configured": true, 00:12:37.241 "data_offset": 2048, 00:12:37.241 "data_size": 63488 00:12:37.241 }, 00:12:37.241 { 00:12:37.241 "name": "BaseBdev2", 00:12:37.241 "uuid": "9670f7d3-20dc-53b0-b508-fa9e973f2f72", 00:12:37.241 "is_configured": true, 00:12:37.241 "data_offset": 2048, 00:12:37.241 "data_size": 63488 00:12:37.241 }, 00:12:37.241 { 00:12:37.241 "name": "BaseBdev3", 00:12:37.241 "uuid": "4d046f3d-f15a-551a-ae0b-6c71fcdd33da", 00:12:37.241 "is_configured": true, 00:12:37.241 "data_offset": 2048, 00:12:37.241 "data_size": 63488 00:12:37.241 }, 00:12:37.241 { 00:12:37.241 "name": "BaseBdev4", 00:12:37.241 "uuid": "a39ff736-3182-5c12-9d7f-fc950541eba3", 00:12:37.241 "is_configured": true, 00:12:37.241 "data_offset": 2048, 00:12:37.241 "data_size": 63488 00:12:37.241 } 00:12:37.241 ] 00:12:37.241 }' 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.241 14:29:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.808 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:37.809 14:29:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:37.809 [2024-11-20 14:29:38.759594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.742 14:29:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.742 "name": "raid_bdev1", 00:12:38.742 "uuid": "d1b814a2-9d5c-4bee-a0a8-df9d51411970", 00:12:38.742 "strip_size_kb": 64, 00:12:38.742 "state": "online", 00:12:38.742 "raid_level": "concat", 00:12:38.742 "superblock": true, 00:12:38.742 "num_base_bdevs": 4, 00:12:38.742 "num_base_bdevs_discovered": 4, 00:12:38.742 "num_base_bdevs_operational": 4, 00:12:38.742 "base_bdevs_list": [ 00:12:38.742 { 00:12:38.742 "name": "BaseBdev1", 00:12:38.742 "uuid": "906c1217-3537-5f88-94c5-45c2891a3479", 00:12:38.742 "is_configured": true, 00:12:38.742 "data_offset": 2048, 00:12:38.742 "data_size": 63488 00:12:38.742 }, 00:12:38.742 { 00:12:38.742 "name": "BaseBdev2", 00:12:38.742 "uuid": "9670f7d3-20dc-53b0-b508-fa9e973f2f72", 00:12:38.742 "is_configured": true, 00:12:38.742 "data_offset": 2048, 00:12:38.742 "data_size": 63488 00:12:38.742 }, 00:12:38.742 { 00:12:38.742 "name": "BaseBdev3", 00:12:38.742 "uuid": "4d046f3d-f15a-551a-ae0b-6c71fcdd33da", 00:12:38.742 "is_configured": true, 00:12:38.742 "data_offset": 2048, 00:12:38.742 "data_size": 63488 00:12:38.742 }, 00:12:38.742 { 00:12:38.742 "name": "BaseBdev4", 00:12:38.742 "uuid": "a39ff736-3182-5c12-9d7f-fc950541eba3", 00:12:38.742 "is_configured": true, 00:12:38.742 "data_offset": 2048, 00:12:38.742 "data_size": 63488 00:12:38.742 } 00:12:38.742 ] 00:12:38.742 }' 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.742 14:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.309 [2024-11-20 14:29:40.129504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.309 [2024-11-20 14:29:40.129573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.309 [2024-11-20 14:29:40.132948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.309 [2024-11-20 14:29:40.133031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.309 [2024-11-20 14:29:40.133097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.309 [2024-11-20 14:29:40.133121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:39.309 { 00:12:39.309 "results": [ 00:12:39.309 { 00:12:39.309 "job": "raid_bdev1", 00:12:39.309 "core_mask": "0x1", 00:12:39.309 "workload": "randrw", 00:12:39.309 "percentage": 50, 00:12:39.309 "status": "finished", 00:12:39.309 "queue_depth": 1, 00:12:39.309 "io_size": 131072, 00:12:39.309 "runtime": 1.367076, 00:12:39.309 "iops": 9937.999057843163, 00:12:39.309 "mibps": 1242.2498822303953, 00:12:39.309 "io_failed": 1, 00:12:39.309 "io_timeout": 0, 00:12:39.309 "avg_latency_us": 140.56284496544157, 00:12:39.309 "min_latency_us": 42.82181818181818, 00:12:39.309 "max_latency_us": 1839.4763636363637 00:12:39.309 } 00:12:39.309 ], 00:12:39.309 "core_count": 1 00:12:39.309 } 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73258 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73258 ']' 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73258 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.309 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73258 00:12:39.310 killing process with pid 73258 00:12:39.310 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.310 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.310 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73258' 00:12:39.310 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73258 00:12:39.310 [2024-11-20 14:29:40.170862] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.310 14:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73258 00:12:39.569 [2024-11-20 14:29:40.479470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.962 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:40.962 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p0xJUwRPBm 00:12:40.962 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:40.962 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:40.963 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:40.963 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:40.963 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:40.963 14:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:40.963 00:12:40.963 real 0m5.099s 00:12:40.963 user 0m6.365s 
00:12:40.963 sys 0m0.638s 00:12:40.963 14:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.963 ************************************ 00:12:40.963 END TEST raid_write_error_test 00:12:40.963 ************************************ 00:12:40.963 14:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.963 14:29:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:40.963 14:29:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:40.963 14:29:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:40.963 14:29:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.963 14:29:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.963 ************************************ 00:12:40.963 START TEST raid_state_function_test 00:12:40.963 ************************************ 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.963 
14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:40.963 14:29:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:40.963 Process raid pid: 73407 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73407 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73407' 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73407 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73407 ']' 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.963 14:29:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.963 [2024-11-20 14:29:41.852315] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:12:40.963 [2024-11-20 14:29:41.852510] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.264 [2024-11-20 14:29:42.040969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.264 [2024-11-20 14:29:42.185440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.523 [2024-11-20 14:29:42.420782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.523 [2024-11-20 14:29:42.422323] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.090 [2024-11-20 14:29:42.875476] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.090 [2024-11-20 14:29:42.876037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.090 [2024-11-20 14:29:42.876073] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.090 [2024-11-20 14:29:42.876308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.090 [2024-11-20 14:29:42.876337] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:42.090 [2024-11-20 14:29:42.876466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.090 [2024-11-20 14:29:42.876485] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:42.090 [2024-11-20 14:29:42.876506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.090 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.090 "name": "Existed_Raid", 00:12:42.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.090 "strip_size_kb": 0, 00:12:42.090 "state": "configuring", 00:12:42.090 "raid_level": "raid1", 00:12:42.090 "superblock": false, 00:12:42.090 "num_base_bdevs": 4, 00:12:42.090 "num_base_bdevs_discovered": 0, 00:12:42.090 "num_base_bdevs_operational": 4, 00:12:42.090 "base_bdevs_list": [ 00:12:42.090 { 00:12:42.090 "name": "BaseBdev1", 00:12:42.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.090 "is_configured": false, 00:12:42.090 "data_offset": 0, 00:12:42.090 "data_size": 0 00:12:42.090 }, 00:12:42.090 { 00:12:42.090 "name": "BaseBdev2", 00:12:42.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.090 "is_configured": false, 00:12:42.090 "data_offset": 0, 00:12:42.090 "data_size": 0 00:12:42.090 }, 00:12:42.090 { 00:12:42.090 "name": "BaseBdev3", 00:12:42.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.090 "is_configured": false, 00:12:42.090 "data_offset": 0, 00:12:42.090 "data_size": 0 00:12:42.090 }, 00:12:42.090 { 00:12:42.090 "name": "BaseBdev4", 00:12:42.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.090 "is_configured": false, 00:12:42.091 "data_offset": 0, 00:12:42.091 "data_size": 0 00:12:42.091 } 00:12:42.091 ] 00:12:42.091 }' 00:12:42.091 14:29:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.091 14:29:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:42.349 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.349 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.349 [2024-11-20 14:29:43.387588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.349 [2024-11-20 14:29:43.387661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:42.349 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.349 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:42.349 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.349 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.349 [2024-11-20 14:29:43.399552] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.349 [2024-11-20 14:29:43.399842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.349 [2024-11-20 14:29:43.399867] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.349 [2024-11-20 14:29:43.399991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.349 [2024-11-20 14:29:43.400011] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.349 [2024-11-20 14:29:43.400099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.349 [2024-11-20 14:29:43.400122] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:42.349 [2024-11-20 14:29:43.400450] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.607 [2024-11-20 14:29:43.445406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.607 BaseBdev1 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.607 [ 00:12:42.607 { 00:12:42.607 "name": "BaseBdev1", 00:12:42.607 "aliases": [ 00:12:42.607 "477d1b40-b683-4184-84f4-03d632c8d6c0" 00:12:42.607 ], 00:12:42.607 "product_name": "Malloc disk", 00:12:42.607 "block_size": 512, 00:12:42.607 "num_blocks": 65536, 00:12:42.607 "uuid": "477d1b40-b683-4184-84f4-03d632c8d6c0", 00:12:42.607 "assigned_rate_limits": { 00:12:42.607 "rw_ios_per_sec": 0, 00:12:42.607 "rw_mbytes_per_sec": 0, 00:12:42.607 "r_mbytes_per_sec": 0, 00:12:42.607 "w_mbytes_per_sec": 0 00:12:42.607 }, 00:12:42.607 "claimed": true, 00:12:42.607 "claim_type": "exclusive_write", 00:12:42.607 "zoned": false, 00:12:42.607 "supported_io_types": { 00:12:42.607 "read": true, 00:12:42.607 "write": true, 00:12:42.607 "unmap": true, 00:12:42.607 "flush": true, 00:12:42.607 "reset": true, 00:12:42.607 "nvme_admin": false, 00:12:42.607 "nvme_io": false, 00:12:42.607 "nvme_io_md": false, 00:12:42.607 "write_zeroes": true, 00:12:42.607 "zcopy": true, 00:12:42.607 "get_zone_info": false, 00:12:42.607 "zone_management": false, 00:12:42.607 "zone_append": false, 00:12:42.607 "compare": false, 00:12:42.607 "compare_and_write": false, 00:12:42.607 "abort": true, 00:12:42.607 "seek_hole": false, 00:12:42.607 "seek_data": false, 00:12:42.607 "copy": true, 00:12:42.607 "nvme_iov_md": false 00:12:42.607 }, 00:12:42.607 "memory_domains": [ 00:12:42.607 { 00:12:42.607 "dma_device_id": "system", 00:12:42.607 "dma_device_type": 1 00:12:42.607 }, 00:12:42.607 { 00:12:42.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.607 "dma_device_type": 2 00:12:42.607 } 00:12:42.607 ], 00:12:42.607 "driver_specific": {} 00:12:42.607 } 00:12:42.607 ] 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.607 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.608 "name": "Existed_Raid", 
00:12:42.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.608 "strip_size_kb": 0, 00:12:42.608 "state": "configuring", 00:12:42.608 "raid_level": "raid1", 00:12:42.608 "superblock": false, 00:12:42.608 "num_base_bdevs": 4, 00:12:42.608 "num_base_bdevs_discovered": 1, 00:12:42.608 "num_base_bdevs_operational": 4, 00:12:42.608 "base_bdevs_list": [ 00:12:42.608 { 00:12:42.608 "name": "BaseBdev1", 00:12:42.608 "uuid": "477d1b40-b683-4184-84f4-03d632c8d6c0", 00:12:42.608 "is_configured": true, 00:12:42.608 "data_offset": 0, 00:12:42.608 "data_size": 65536 00:12:42.608 }, 00:12:42.608 { 00:12:42.608 "name": "BaseBdev2", 00:12:42.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.608 "is_configured": false, 00:12:42.608 "data_offset": 0, 00:12:42.608 "data_size": 0 00:12:42.608 }, 00:12:42.608 { 00:12:42.608 "name": "BaseBdev3", 00:12:42.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.608 "is_configured": false, 00:12:42.608 "data_offset": 0, 00:12:42.608 "data_size": 0 00:12:42.608 }, 00:12:42.608 { 00:12:42.608 "name": "BaseBdev4", 00:12:42.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.608 "is_configured": false, 00:12:42.608 "data_offset": 0, 00:12:42.608 "data_size": 0 00:12:42.608 } 00:12:42.608 ] 00:12:42.608 }' 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.608 14:29:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.175 [2024-11-20 14:29:44.025681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.175 [2024-11-20 14:29:44.025752] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.175 [2024-11-20 14:29:44.033717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.175 [2024-11-20 14:29:44.036423] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.175 [2024-11-20 14:29:44.037080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.175 [2024-11-20 14:29:44.037234] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.175 [2024-11-20 14:29:44.037393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.175 [2024-11-20 14:29:44.037590] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:43.175 [2024-11-20 14:29:44.037759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:43.175 
14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.175 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.176 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.176 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.176 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.176 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.176 "name": "Existed_Raid", 00:12:43.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.176 "strip_size_kb": 0, 00:12:43.176 "state": "configuring", 00:12:43.176 "raid_level": "raid1", 00:12:43.176 "superblock": false, 00:12:43.176 "num_base_bdevs": 4, 00:12:43.176 "num_base_bdevs_discovered": 1, 
00:12:43.176 "num_base_bdevs_operational": 4, 00:12:43.176 "base_bdevs_list": [ 00:12:43.176 { 00:12:43.176 "name": "BaseBdev1", 00:12:43.176 "uuid": "477d1b40-b683-4184-84f4-03d632c8d6c0", 00:12:43.176 "is_configured": true, 00:12:43.176 "data_offset": 0, 00:12:43.176 "data_size": 65536 00:12:43.176 }, 00:12:43.176 { 00:12:43.176 "name": "BaseBdev2", 00:12:43.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.176 "is_configured": false, 00:12:43.176 "data_offset": 0, 00:12:43.176 "data_size": 0 00:12:43.176 }, 00:12:43.176 { 00:12:43.176 "name": "BaseBdev3", 00:12:43.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.176 "is_configured": false, 00:12:43.176 "data_offset": 0, 00:12:43.176 "data_size": 0 00:12:43.176 }, 00:12:43.176 { 00:12:43.176 "name": "BaseBdev4", 00:12:43.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.176 "is_configured": false, 00:12:43.176 "data_offset": 0, 00:12:43.176 "data_size": 0 00:12:43.176 } 00:12:43.176 ] 00:12:43.176 }' 00:12:43.176 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.176 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.743 [2024-11-20 14:29:44.589178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.743 BaseBdev2 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.743 [ 00:12:43.743 { 00:12:43.743 "name": "BaseBdev2", 00:12:43.743 "aliases": [ 00:12:43.743 "47e49658-a1a9-4ba8-92d0-007c656997bc" 00:12:43.743 ], 00:12:43.743 "product_name": "Malloc disk", 00:12:43.743 "block_size": 512, 00:12:43.743 "num_blocks": 65536, 00:12:43.743 "uuid": "47e49658-a1a9-4ba8-92d0-007c656997bc", 00:12:43.743 "assigned_rate_limits": { 00:12:43.743 "rw_ios_per_sec": 0, 00:12:43.743 "rw_mbytes_per_sec": 0, 00:12:43.743 "r_mbytes_per_sec": 0, 00:12:43.743 "w_mbytes_per_sec": 0 00:12:43.743 }, 00:12:43.743 "claimed": true, 00:12:43.743 "claim_type": "exclusive_write", 00:12:43.743 "zoned": false, 00:12:43.743 "supported_io_types": { 00:12:43.743 "read": true, 
00:12:43.743 "write": true, 00:12:43.743 "unmap": true, 00:12:43.743 "flush": true, 00:12:43.743 "reset": true, 00:12:43.743 "nvme_admin": false, 00:12:43.743 "nvme_io": false, 00:12:43.743 "nvme_io_md": false, 00:12:43.743 "write_zeroes": true, 00:12:43.743 "zcopy": true, 00:12:43.743 "get_zone_info": false, 00:12:43.743 "zone_management": false, 00:12:43.743 "zone_append": false, 00:12:43.743 "compare": false, 00:12:43.743 "compare_and_write": false, 00:12:43.743 "abort": true, 00:12:43.743 "seek_hole": false, 00:12:43.743 "seek_data": false, 00:12:43.743 "copy": true, 00:12:43.743 "nvme_iov_md": false 00:12:43.743 }, 00:12:43.743 "memory_domains": [ 00:12:43.743 { 00:12:43.743 "dma_device_id": "system", 00:12:43.743 "dma_device_type": 1 00:12:43.743 }, 00:12:43.743 { 00:12:43.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.743 "dma_device_type": 2 00:12:43.743 } 00:12:43.743 ], 00:12:43.743 "driver_specific": {} 00:12:43.743 } 00:12:43.743 ] 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.743 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.744 "name": "Existed_Raid", 00:12:43.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.744 "strip_size_kb": 0, 00:12:43.744 "state": "configuring", 00:12:43.744 "raid_level": "raid1", 00:12:43.744 "superblock": false, 00:12:43.744 "num_base_bdevs": 4, 00:12:43.744 "num_base_bdevs_discovered": 2, 00:12:43.744 "num_base_bdevs_operational": 4, 00:12:43.744 "base_bdevs_list": [ 00:12:43.744 { 00:12:43.744 "name": "BaseBdev1", 00:12:43.744 "uuid": "477d1b40-b683-4184-84f4-03d632c8d6c0", 00:12:43.744 "is_configured": true, 00:12:43.744 "data_offset": 0, 00:12:43.744 "data_size": 65536 00:12:43.744 }, 00:12:43.744 { 00:12:43.744 "name": "BaseBdev2", 00:12:43.744 "uuid": "47e49658-a1a9-4ba8-92d0-007c656997bc", 00:12:43.744 "is_configured": true, 
00:12:43.744 "data_offset": 0, 00:12:43.744 "data_size": 65536 00:12:43.744 }, 00:12:43.744 { 00:12:43.744 "name": "BaseBdev3", 00:12:43.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.744 "is_configured": false, 00:12:43.744 "data_offset": 0, 00:12:43.744 "data_size": 0 00:12:43.744 }, 00:12:43.744 { 00:12:43.744 "name": "BaseBdev4", 00:12:43.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.744 "is_configured": false, 00:12:43.744 "data_offset": 0, 00:12:43.744 "data_size": 0 00:12:43.744 } 00:12:43.744 ] 00:12:43.744 }' 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.744 14:29:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.338 [2024-11-20 14:29:45.164744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.338 BaseBdev3 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.338 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.338 [ 00:12:44.338 { 00:12:44.338 "name": "BaseBdev3", 00:12:44.338 "aliases": [ 00:12:44.338 "3eda5f29-64cb-42b4-a0f7-acf781405722" 00:12:44.338 ], 00:12:44.338 "product_name": "Malloc disk", 00:12:44.338 "block_size": 512, 00:12:44.338 "num_blocks": 65536, 00:12:44.339 "uuid": "3eda5f29-64cb-42b4-a0f7-acf781405722", 00:12:44.339 "assigned_rate_limits": { 00:12:44.339 "rw_ios_per_sec": 0, 00:12:44.339 "rw_mbytes_per_sec": 0, 00:12:44.339 "r_mbytes_per_sec": 0, 00:12:44.339 "w_mbytes_per_sec": 0 00:12:44.339 }, 00:12:44.339 "claimed": true, 00:12:44.339 "claim_type": "exclusive_write", 00:12:44.339 "zoned": false, 00:12:44.339 "supported_io_types": { 00:12:44.339 "read": true, 00:12:44.339 "write": true, 00:12:44.339 "unmap": true, 00:12:44.339 "flush": true, 00:12:44.339 "reset": true, 00:12:44.339 "nvme_admin": false, 00:12:44.339 "nvme_io": false, 00:12:44.339 "nvme_io_md": false, 00:12:44.339 "write_zeroes": true, 00:12:44.339 "zcopy": true, 00:12:44.339 "get_zone_info": false, 00:12:44.339 "zone_management": false, 00:12:44.339 "zone_append": false, 00:12:44.339 "compare": false, 00:12:44.339 "compare_and_write": false, 
00:12:44.339 "abort": true, 00:12:44.339 "seek_hole": false, 00:12:44.339 "seek_data": false, 00:12:44.339 "copy": true, 00:12:44.339 "nvme_iov_md": false 00:12:44.339 }, 00:12:44.339 "memory_domains": [ 00:12:44.339 { 00:12:44.339 "dma_device_id": "system", 00:12:44.339 "dma_device_type": 1 00:12:44.339 }, 00:12:44.339 { 00:12:44.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.339 "dma_device_type": 2 00:12:44.339 } 00:12:44.339 ], 00:12:44.339 "driver_specific": {} 00:12:44.339 } 00:12:44.339 ] 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.339 "name": "Existed_Raid", 00:12:44.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.339 "strip_size_kb": 0, 00:12:44.339 "state": "configuring", 00:12:44.339 "raid_level": "raid1", 00:12:44.339 "superblock": false, 00:12:44.339 "num_base_bdevs": 4, 00:12:44.339 "num_base_bdevs_discovered": 3, 00:12:44.339 "num_base_bdevs_operational": 4, 00:12:44.339 "base_bdevs_list": [ 00:12:44.339 { 00:12:44.339 "name": "BaseBdev1", 00:12:44.339 "uuid": "477d1b40-b683-4184-84f4-03d632c8d6c0", 00:12:44.339 "is_configured": true, 00:12:44.339 "data_offset": 0, 00:12:44.339 "data_size": 65536 00:12:44.339 }, 00:12:44.339 { 00:12:44.339 "name": "BaseBdev2", 00:12:44.339 "uuid": "47e49658-a1a9-4ba8-92d0-007c656997bc", 00:12:44.339 "is_configured": true, 00:12:44.339 "data_offset": 0, 00:12:44.339 "data_size": 65536 00:12:44.339 }, 00:12:44.339 { 00:12:44.339 "name": "BaseBdev3", 00:12:44.339 "uuid": "3eda5f29-64cb-42b4-a0f7-acf781405722", 00:12:44.339 "is_configured": true, 00:12:44.339 "data_offset": 0, 00:12:44.339 "data_size": 65536 00:12:44.339 }, 00:12:44.339 { 00:12:44.339 "name": "BaseBdev4", 00:12:44.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.339 "is_configured": false, 
00:12:44.339 "data_offset": 0, 00:12:44.339 "data_size": 0 00:12:44.339 } 00:12:44.339 ] 00:12:44.339 }' 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.339 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.904 [2024-11-20 14:29:45.744161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.904 [2024-11-20 14:29:45.744392] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.904 [2024-11-20 14:29:45.744448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:44.904 [2024-11-20 14:29:45.744939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:44.904 [2024-11-20 14:29:45.745309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.904 [2024-11-20 14:29:45.745340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:44.904 [2024-11-20 14:29:45.745709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.904 BaseBdev4 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.904 [ 00:12:44.904 { 00:12:44.904 "name": "BaseBdev4", 00:12:44.904 "aliases": [ 00:12:44.904 "1a1be2d0-e076-4783-849e-a32d0dede364" 00:12:44.904 ], 00:12:44.904 "product_name": "Malloc disk", 00:12:44.904 "block_size": 512, 00:12:44.904 "num_blocks": 65536, 00:12:44.904 "uuid": "1a1be2d0-e076-4783-849e-a32d0dede364", 00:12:44.904 "assigned_rate_limits": { 00:12:44.904 "rw_ios_per_sec": 0, 00:12:44.904 "rw_mbytes_per_sec": 0, 00:12:44.904 "r_mbytes_per_sec": 0, 00:12:44.904 "w_mbytes_per_sec": 0 00:12:44.904 }, 00:12:44.904 "claimed": true, 00:12:44.904 "claim_type": "exclusive_write", 00:12:44.904 "zoned": false, 00:12:44.904 "supported_io_types": { 00:12:44.904 "read": true, 00:12:44.904 "write": true, 00:12:44.904 "unmap": true, 00:12:44.904 "flush": true, 00:12:44.904 "reset": true, 00:12:44.904 
"nvme_admin": false, 00:12:44.904 "nvme_io": false, 00:12:44.904 "nvme_io_md": false, 00:12:44.904 "write_zeroes": true, 00:12:44.904 "zcopy": true, 00:12:44.904 "get_zone_info": false, 00:12:44.904 "zone_management": false, 00:12:44.904 "zone_append": false, 00:12:44.904 "compare": false, 00:12:44.904 "compare_and_write": false, 00:12:44.904 "abort": true, 00:12:44.904 "seek_hole": false, 00:12:44.904 "seek_data": false, 00:12:44.904 "copy": true, 00:12:44.904 "nvme_iov_md": false 00:12:44.904 }, 00:12:44.904 "memory_domains": [ 00:12:44.904 { 00:12:44.904 "dma_device_id": "system", 00:12:44.904 "dma_device_type": 1 00:12:44.904 }, 00:12:44.904 { 00:12:44.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.904 "dma_device_type": 2 00:12:44.904 } 00:12:44.904 ], 00:12:44.904 "driver_specific": {} 00:12:44.904 } 00:12:44.904 ] 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:44.904 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.905 14:29:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.905 "name": "Existed_Raid", 00:12:44.905 "uuid": "d37b3b17-cf69-475a-ac0e-42ca0daa0067", 00:12:44.905 "strip_size_kb": 0, 00:12:44.905 "state": "online", 00:12:44.905 "raid_level": "raid1", 00:12:44.905 "superblock": false, 00:12:44.905 "num_base_bdevs": 4, 00:12:44.905 "num_base_bdevs_discovered": 4, 00:12:44.905 "num_base_bdevs_operational": 4, 00:12:44.905 "base_bdevs_list": [ 00:12:44.905 { 00:12:44.905 "name": "BaseBdev1", 00:12:44.905 "uuid": "477d1b40-b683-4184-84f4-03d632c8d6c0", 00:12:44.905 "is_configured": true, 00:12:44.905 "data_offset": 0, 00:12:44.905 "data_size": 65536 00:12:44.905 }, 00:12:44.905 { 00:12:44.905 "name": "BaseBdev2", 00:12:44.905 "uuid": "47e49658-a1a9-4ba8-92d0-007c656997bc", 00:12:44.905 "is_configured": true, 00:12:44.905 "data_offset": 0, 00:12:44.905 "data_size": 65536 00:12:44.905 }, 00:12:44.905 { 00:12:44.905 "name": "BaseBdev3", 00:12:44.905 "uuid": 
"3eda5f29-64cb-42b4-a0f7-acf781405722", 00:12:44.905 "is_configured": true, 00:12:44.905 "data_offset": 0, 00:12:44.905 "data_size": 65536 00:12:44.905 }, 00:12:44.905 { 00:12:44.905 "name": "BaseBdev4", 00:12:44.905 "uuid": "1a1be2d0-e076-4783-849e-a32d0dede364", 00:12:44.905 "is_configured": true, 00:12:44.905 "data_offset": 0, 00:12:44.905 "data_size": 65536 00:12:44.905 } 00:12:44.905 ] 00:12:44.905 }' 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.905 14:29:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.470 [2024-11-20 14:29:46.300856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.470 14:29:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.470 "name": "Existed_Raid", 00:12:45.470 "aliases": [ 00:12:45.470 "d37b3b17-cf69-475a-ac0e-42ca0daa0067" 00:12:45.470 ], 00:12:45.470 "product_name": "Raid Volume", 00:12:45.470 "block_size": 512, 00:12:45.470 "num_blocks": 65536, 00:12:45.470 "uuid": "d37b3b17-cf69-475a-ac0e-42ca0daa0067", 00:12:45.470 "assigned_rate_limits": { 00:12:45.470 "rw_ios_per_sec": 0, 00:12:45.470 "rw_mbytes_per_sec": 0, 00:12:45.470 "r_mbytes_per_sec": 0, 00:12:45.470 "w_mbytes_per_sec": 0 00:12:45.470 }, 00:12:45.470 "claimed": false, 00:12:45.470 "zoned": false, 00:12:45.470 "supported_io_types": { 00:12:45.470 "read": true, 00:12:45.470 "write": true, 00:12:45.470 "unmap": false, 00:12:45.470 "flush": false, 00:12:45.470 "reset": true, 00:12:45.470 "nvme_admin": false, 00:12:45.470 "nvme_io": false, 00:12:45.470 "nvme_io_md": false, 00:12:45.470 "write_zeroes": true, 00:12:45.470 "zcopy": false, 00:12:45.470 "get_zone_info": false, 00:12:45.470 "zone_management": false, 00:12:45.470 "zone_append": false, 00:12:45.470 "compare": false, 00:12:45.470 "compare_and_write": false, 00:12:45.470 "abort": false, 00:12:45.470 "seek_hole": false, 00:12:45.470 "seek_data": false, 00:12:45.470 "copy": false, 00:12:45.470 "nvme_iov_md": false 00:12:45.470 }, 00:12:45.470 "memory_domains": [ 00:12:45.470 { 00:12:45.470 "dma_device_id": "system", 00:12:45.470 "dma_device_type": 1 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.470 "dma_device_type": 2 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "dma_device_id": "system", 00:12:45.470 "dma_device_type": 1 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.470 "dma_device_type": 2 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "dma_device_id": "system", 00:12:45.470 "dma_device_type": 1 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:45.470 "dma_device_type": 2 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "dma_device_id": "system", 00:12:45.470 "dma_device_type": 1 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.470 "dma_device_type": 2 00:12:45.470 } 00:12:45.470 ], 00:12:45.470 "driver_specific": { 00:12:45.470 "raid": { 00:12:45.470 "uuid": "d37b3b17-cf69-475a-ac0e-42ca0daa0067", 00:12:45.470 "strip_size_kb": 0, 00:12:45.470 "state": "online", 00:12:45.470 "raid_level": "raid1", 00:12:45.470 "superblock": false, 00:12:45.470 "num_base_bdevs": 4, 00:12:45.470 "num_base_bdevs_discovered": 4, 00:12:45.470 "num_base_bdevs_operational": 4, 00:12:45.470 "base_bdevs_list": [ 00:12:45.470 { 00:12:45.470 "name": "BaseBdev1", 00:12:45.470 "uuid": "477d1b40-b683-4184-84f4-03d632c8d6c0", 00:12:45.470 "is_configured": true, 00:12:45.470 "data_offset": 0, 00:12:45.470 "data_size": 65536 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "name": "BaseBdev2", 00:12:45.470 "uuid": "47e49658-a1a9-4ba8-92d0-007c656997bc", 00:12:45.470 "is_configured": true, 00:12:45.470 "data_offset": 0, 00:12:45.470 "data_size": 65536 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "name": "BaseBdev3", 00:12:45.470 "uuid": "3eda5f29-64cb-42b4-a0f7-acf781405722", 00:12:45.470 "is_configured": true, 00:12:45.470 "data_offset": 0, 00:12:45.470 "data_size": 65536 00:12:45.470 }, 00:12:45.470 { 00:12:45.470 "name": "BaseBdev4", 00:12:45.470 "uuid": "1a1be2d0-e076-4783-849e-a32d0dede364", 00:12:45.470 "is_configured": true, 00:12:45.470 "data_offset": 0, 00:12:45.470 "data_size": 65536 00:12:45.470 } 00:12:45.470 ] 00:12:45.470 } 00:12:45.470 } 00:12:45.470 }' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:45.470 BaseBdev2 00:12:45.470 BaseBdev3 
00:12:45.470 BaseBdev4' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.470 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.729 14:29:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.729 14:29:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.729 [2024-11-20 14:29:46.672572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.729 
14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.729 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.987 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.987 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.987 "name": "Existed_Raid", 00:12:45.987 "uuid": "d37b3b17-cf69-475a-ac0e-42ca0daa0067", 00:12:45.987 "strip_size_kb": 0, 00:12:45.987 "state": "online", 00:12:45.987 "raid_level": "raid1", 00:12:45.987 "superblock": false, 00:12:45.987 "num_base_bdevs": 4, 00:12:45.987 "num_base_bdevs_discovered": 3, 00:12:45.987 "num_base_bdevs_operational": 3, 00:12:45.987 "base_bdevs_list": [ 00:12:45.987 { 00:12:45.987 "name": null, 00:12:45.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.987 "is_configured": false, 00:12:45.987 "data_offset": 0, 00:12:45.987 "data_size": 65536 00:12:45.987 }, 00:12:45.987 { 00:12:45.987 "name": "BaseBdev2", 00:12:45.987 "uuid": "47e49658-a1a9-4ba8-92d0-007c656997bc", 00:12:45.987 "is_configured": true, 00:12:45.987 "data_offset": 0, 00:12:45.987 "data_size": 65536 00:12:45.987 }, 00:12:45.987 { 00:12:45.987 "name": "BaseBdev3", 00:12:45.987 "uuid": "3eda5f29-64cb-42b4-a0f7-acf781405722", 00:12:45.987 "is_configured": true, 00:12:45.987 "data_offset": 0, 
00:12:45.987 "data_size": 65536 00:12:45.987 }, 00:12:45.987 { 00:12:45.987 "name": "BaseBdev4", 00:12:45.987 "uuid": "1a1be2d0-e076-4783-849e-a32d0dede364", 00:12:45.987 "is_configured": true, 00:12:45.987 "data_offset": 0, 00:12:45.987 "data_size": 65536 00:12:45.987 } 00:12:45.987 ] 00:12:45.987 }' 00:12:45.987 14:29:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.987 14:29:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.245 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.504 [2024-11-20 14:29:47.302675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.504 [2024-11-20 14:29:47.448670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.504 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.763 [2024-11-20 14:29:47.594459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:46.763 [2024-11-20 14:29:47.594589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:46.763 [2024-11-20 14:29:47.679251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:46.763 [2024-11-20 14:29:47.679547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:46.763 [2024-11-20 14:29:47.679722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.763 BaseBdev2 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.763 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.763 [ 00:12:46.763 { 00:12:46.763 "name": "BaseBdev2", 00:12:46.763 "aliases": [ 00:12:46.763 "2ede4e4b-be30-4700-a98a-8d2abfacfda4" 00:12:46.763 ], 00:12:46.763 "product_name": "Malloc disk", 00:12:46.763 "block_size": 512, 00:12:46.763 "num_blocks": 65536, 00:12:46.763 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:46.763 "assigned_rate_limits": { 00:12:46.763 "rw_ios_per_sec": 0, 00:12:46.763 "rw_mbytes_per_sec": 0, 00:12:46.763 "r_mbytes_per_sec": 0, 00:12:46.763 "w_mbytes_per_sec": 0 00:12:46.763 }, 00:12:46.763 "claimed": false, 00:12:46.763 "zoned": false, 00:12:46.763 "supported_io_types": { 00:12:46.763 "read": true, 00:12:46.763 "write": true, 00:12:46.763 "unmap": true, 00:12:46.763 "flush": true, 00:12:46.763 "reset": true, 00:12:46.763 "nvme_admin": false, 00:12:46.763 "nvme_io": false, 00:12:46.764 "nvme_io_md": false, 00:12:46.764 "write_zeroes": true, 00:12:46.764 "zcopy": true, 00:12:46.764 "get_zone_info": false, 00:12:46.764 "zone_management": false, 00:12:46.764 "zone_append": false, 
00:12:46.764 "compare": false, 00:12:46.764 "compare_and_write": false, 00:12:46.764 "abort": true, 00:12:46.764 "seek_hole": false, 00:12:46.764 "seek_data": false, 00:12:46.764 "copy": true, 00:12:46.764 "nvme_iov_md": false 00:12:46.764 }, 00:12:46.764 "memory_domains": [ 00:12:46.764 { 00:12:46.764 "dma_device_id": "system", 00:12:46.764 "dma_device_type": 1 00:12:46.764 }, 00:12:46.764 { 00:12:46.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.764 "dma_device_type": 2 00:12:46.764 } 00:12:46.764 ], 00:12:46.764 "driver_specific": {} 00:12:46.764 } 00:12:46.764 ] 00:12:46.764 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.764 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:46.764 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.764 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.764 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.764 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.764 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.022 BaseBdev3 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.023 [ 00:12:47.023 { 00:12:47.023 "name": "BaseBdev3", 00:12:47.023 "aliases": [ 00:12:47.023 "48c25680-e915-48a9-9f34-b7ddd5238af5" 00:12:47.023 ], 00:12:47.023 "product_name": "Malloc disk", 00:12:47.023 "block_size": 512, 00:12:47.023 "num_blocks": 65536, 00:12:47.023 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:47.023 "assigned_rate_limits": { 00:12:47.023 "rw_ios_per_sec": 0, 00:12:47.023 "rw_mbytes_per_sec": 0, 00:12:47.023 "r_mbytes_per_sec": 0, 00:12:47.023 "w_mbytes_per_sec": 0 00:12:47.023 }, 00:12:47.023 "claimed": false, 00:12:47.023 "zoned": false, 00:12:47.023 "supported_io_types": { 00:12:47.023 "read": true, 00:12:47.023 "write": true, 00:12:47.023 "unmap": true, 00:12:47.023 "flush": true, 00:12:47.023 "reset": true, 00:12:47.023 "nvme_admin": false, 00:12:47.023 "nvme_io": false, 00:12:47.023 "nvme_io_md": false, 00:12:47.023 "write_zeroes": true, 00:12:47.023 "zcopy": true, 00:12:47.023 "get_zone_info": false, 00:12:47.023 "zone_management": false, 00:12:47.023 "zone_append": false, 
00:12:47.023 "compare": false, 00:12:47.023 "compare_and_write": false, 00:12:47.023 "abort": true, 00:12:47.023 "seek_hole": false, 00:12:47.023 "seek_data": false, 00:12:47.023 "copy": true, 00:12:47.023 "nvme_iov_md": false 00:12:47.023 }, 00:12:47.023 "memory_domains": [ 00:12:47.023 { 00:12:47.023 "dma_device_id": "system", 00:12:47.023 "dma_device_type": 1 00:12:47.023 }, 00:12:47.023 { 00:12:47.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.023 "dma_device_type": 2 00:12:47.023 } 00:12:47.023 ], 00:12:47.023 "driver_specific": {} 00:12:47.023 } 00:12:47.023 ] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.023 BaseBdev4 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.023 [ 00:12:47.023 { 00:12:47.023 "name": "BaseBdev4", 00:12:47.023 "aliases": [ 00:12:47.023 "2633337c-c258-4754-8d3c-d7589b5c7b69" 00:12:47.023 ], 00:12:47.023 "product_name": "Malloc disk", 00:12:47.023 "block_size": 512, 00:12:47.023 "num_blocks": 65536, 00:12:47.023 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:47.023 "assigned_rate_limits": { 00:12:47.023 "rw_ios_per_sec": 0, 00:12:47.023 "rw_mbytes_per_sec": 0, 00:12:47.023 "r_mbytes_per_sec": 0, 00:12:47.023 "w_mbytes_per_sec": 0 00:12:47.023 }, 00:12:47.023 "claimed": false, 00:12:47.023 "zoned": false, 00:12:47.023 "supported_io_types": { 00:12:47.023 "read": true, 00:12:47.023 "write": true, 00:12:47.023 "unmap": true, 00:12:47.023 "flush": true, 00:12:47.023 "reset": true, 00:12:47.023 "nvme_admin": false, 00:12:47.023 "nvme_io": false, 00:12:47.023 "nvme_io_md": false, 00:12:47.023 "write_zeroes": true, 00:12:47.023 "zcopy": true, 00:12:47.023 "get_zone_info": false, 00:12:47.023 "zone_management": false, 00:12:47.023 "zone_append": false, 
00:12:47.023 "compare": false, 00:12:47.023 "compare_and_write": false, 00:12:47.023 "abort": true, 00:12:47.023 "seek_hole": false, 00:12:47.023 "seek_data": false, 00:12:47.023 "copy": true, 00:12:47.023 "nvme_iov_md": false 00:12:47.023 }, 00:12:47.023 "memory_domains": [ 00:12:47.023 { 00:12:47.023 "dma_device_id": "system", 00:12:47.023 "dma_device_type": 1 00:12:47.023 }, 00:12:47.023 { 00:12:47.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.023 "dma_device_type": 2 00:12:47.023 } 00:12:47.023 ], 00:12:47.023 "driver_specific": {} 00:12:47.023 } 00:12:47.023 ] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.023 [2024-11-20 14:29:47.972344] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.023 [2024-11-20 14:29:47.973131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.023 [2024-11-20 14:29:47.973288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.023 [2024-11-20 14:29:47.975840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.023 [2024-11-20 14:29:47.976032] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.023 14:29:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.024 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.024 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.024 14:29:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.024 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:47.024 "name": "Existed_Raid", 00:12:47.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.024 "strip_size_kb": 0, 00:12:47.024 "state": "configuring", 00:12:47.024 "raid_level": "raid1", 00:12:47.024 "superblock": false, 00:12:47.024 "num_base_bdevs": 4, 00:12:47.024 "num_base_bdevs_discovered": 3, 00:12:47.024 "num_base_bdevs_operational": 4, 00:12:47.024 "base_bdevs_list": [ 00:12:47.024 { 00:12:47.024 "name": "BaseBdev1", 00:12:47.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.024 "is_configured": false, 00:12:47.024 "data_offset": 0, 00:12:47.024 "data_size": 0 00:12:47.024 }, 00:12:47.024 { 00:12:47.024 "name": "BaseBdev2", 00:12:47.024 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:47.024 "is_configured": true, 00:12:47.024 "data_offset": 0, 00:12:47.024 "data_size": 65536 00:12:47.024 }, 00:12:47.024 { 00:12:47.024 "name": "BaseBdev3", 00:12:47.024 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:47.024 "is_configured": true, 00:12:47.024 "data_offset": 0, 00:12:47.024 "data_size": 65536 00:12:47.024 }, 00:12:47.024 { 00:12:47.024 "name": "BaseBdev4", 00:12:47.024 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:47.024 "is_configured": true, 00:12:47.024 "data_offset": 0, 00:12:47.024 "data_size": 65536 00:12:47.024 } 00:12:47.024 ] 00:12:47.024 }' 00:12:47.024 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.024 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.590 [2024-11-20 14:29:48.484588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
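The entry above removes BaseBdev2 and then re-verifies that the array stays in the `configuring` state. A minimal standalone reproduction of that check is sketched below; `rpc_cmd` is a hypothetical stub returning canned JSON whose values mirror the dump later in this log (the real helper wraps SPDK's `scripts/rpc.py` against a live target), and `jq` is assumed to be installed.

```shell
#!/usr/bin/env bash
# Sketch of the verification done at bdev/bdev_raid.sh@293-294 in this trace:
# after bdev_raid_remove_base_bdev BaseBdev2, the raid bdev must still report
# state "configuring" with only 2 of 4 base bdevs discovered.
rpc_cmd() {
    # Stub: the real rpc_cmd sends the RPC to the SPDK target and prints its
    # JSON reply. Canned values here match the Existed_Raid dump in this log.
    echo '[{"name": "Existed_Raid", "state": "configuring", "num_base_bdevs": 4, "num_base_bdevs_discovered": 2}]'
}

# Same jq filter the test uses to pick out the raid bdev by name.
raid_bdev_info=$(rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(jq -r '.state' <<< "$raid_bdev_info")
[[ $state == configuring ]] && echo "state OK: $state"
```

The `select(.name == "Existed_Raid")` filter is the same one `verify_raid_bdev_state` runs at `bdev_raid.sh@113` throughout this trace.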
00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.590 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.591 "name": "Existed_Raid", 00:12:47.591 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:47.591 "strip_size_kb": 0, 00:12:47.591 "state": "configuring", 00:12:47.591 "raid_level": "raid1", 00:12:47.591 "superblock": false, 00:12:47.591 "num_base_bdevs": 4, 00:12:47.591 "num_base_bdevs_discovered": 2, 00:12:47.591 "num_base_bdevs_operational": 4, 00:12:47.591 "base_bdevs_list": [ 00:12:47.591 { 00:12:47.591 "name": "BaseBdev1", 00:12:47.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.591 "is_configured": false, 00:12:47.591 "data_offset": 0, 00:12:47.591 "data_size": 0 00:12:47.591 }, 00:12:47.591 { 00:12:47.591 "name": null, 00:12:47.591 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:47.591 "is_configured": false, 00:12:47.591 "data_offset": 0, 00:12:47.591 "data_size": 65536 00:12:47.591 }, 00:12:47.591 { 00:12:47.591 "name": "BaseBdev3", 00:12:47.591 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:47.591 "is_configured": true, 00:12:47.591 "data_offset": 0, 00:12:47.591 "data_size": 65536 00:12:47.591 }, 00:12:47.591 { 00:12:47.591 "name": "BaseBdev4", 00:12:47.591 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:47.591 "is_configured": true, 00:12:47.591 "data_offset": 0, 00:12:47.591 "data_size": 65536 00:12:47.591 } 00:12:47.591 ] 00:12:47.591 }' 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.591 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.158 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.158 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.158 14:29:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.158 14:29:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.158 [2024-11-20 14:29:49.087280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.158 BaseBdev1 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:48.158 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.159 [ 00:12:48.159 { 00:12:48.159 "name": "BaseBdev1", 00:12:48.159 "aliases": [ 00:12:48.159 "a6f78145-4457-4249-b537-2e082eadc3d6" 00:12:48.159 ], 00:12:48.159 "product_name": "Malloc disk", 00:12:48.159 "block_size": 512, 00:12:48.159 "num_blocks": 65536, 00:12:48.159 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:48.159 "assigned_rate_limits": { 00:12:48.159 "rw_ios_per_sec": 0, 00:12:48.159 "rw_mbytes_per_sec": 0, 00:12:48.159 "r_mbytes_per_sec": 0, 00:12:48.159 "w_mbytes_per_sec": 0 00:12:48.159 }, 00:12:48.159 "claimed": true, 00:12:48.159 "claim_type": "exclusive_write", 00:12:48.159 "zoned": false, 00:12:48.159 "supported_io_types": { 00:12:48.159 "read": true, 00:12:48.159 "write": true, 00:12:48.159 "unmap": true, 00:12:48.159 "flush": true, 00:12:48.159 "reset": true, 00:12:48.159 "nvme_admin": false, 00:12:48.159 "nvme_io": false, 00:12:48.159 "nvme_io_md": false, 00:12:48.159 "write_zeroes": true, 00:12:48.159 "zcopy": true, 00:12:48.159 "get_zone_info": false, 00:12:48.159 "zone_management": false, 00:12:48.159 "zone_append": false, 00:12:48.159 "compare": false, 00:12:48.159 "compare_and_write": false, 00:12:48.159 "abort": true, 00:12:48.159 "seek_hole": false, 00:12:48.159 "seek_data": false, 00:12:48.159 "copy": true, 00:12:48.159 "nvme_iov_md": false 00:12:48.159 }, 00:12:48.159 "memory_domains": [ 00:12:48.159 { 00:12:48.159 "dma_device_id": "system", 00:12:48.159 "dma_device_type": 1 00:12:48.159 }, 00:12:48.159 { 00:12:48.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.159 "dma_device_type": 2 00:12:48.159 } 00:12:48.159 ], 00:12:48.159 "driver_specific": {} 00:12:48.159 } 00:12:48.159 ] 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
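Each base bdev created above goes through the same `waitforbdev` helper (`common/autotest_common.sh@903-911`): default the timeout to 2000 ms, flush examine callbacks, then wait for the bdev to appear. The sketch below approximates that flow; retry details are omitted, and `rpc_cmd` is stubbed to echo the RPC it would send so the fragment runs without a live SPDK target.

```shell
#!/usr/bin/env bash
# Approximation of the waitforbdev helper exercised repeatedly in this trace.
rpc_cmd() { echo "rpc: $*"; }  # stub; the real helper talks to scripts/rpc.py

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    # The trace shows bdev_timeout defaulting to 2000 when not passed.
    [[ -z $bdev_timeout ]] && bdev_timeout=2000
    # Let any pending examine callbacks finish before querying the bdev.
    rpc_cmd bdev_wait_for_examine
    # -t makes the target itself wait up to the timeout for the bdev.
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    return 0
}

waitforbdev BaseBdev1
```

With the stub in place this prints the two RPCs, ending in `bdev_get_bdevs -b BaseBdev1 -t 2000`, matching the command visible in the trace above.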
00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.159 "name": "Existed_Raid", 00:12:48.159 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:48.159 "strip_size_kb": 0, 00:12:48.159 "state": "configuring", 00:12:48.159 "raid_level": "raid1", 00:12:48.159 "superblock": false, 00:12:48.159 "num_base_bdevs": 4, 00:12:48.159 "num_base_bdevs_discovered": 3, 00:12:48.159 "num_base_bdevs_operational": 4, 00:12:48.159 "base_bdevs_list": [ 00:12:48.159 { 00:12:48.159 "name": "BaseBdev1", 00:12:48.159 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:48.159 "is_configured": true, 00:12:48.159 "data_offset": 0, 00:12:48.159 "data_size": 65536 00:12:48.159 }, 00:12:48.159 { 00:12:48.159 "name": null, 00:12:48.159 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:48.159 "is_configured": false, 00:12:48.159 "data_offset": 0, 00:12:48.159 "data_size": 65536 00:12:48.159 }, 00:12:48.159 { 00:12:48.159 "name": "BaseBdev3", 00:12:48.159 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:48.159 "is_configured": true, 00:12:48.159 "data_offset": 0, 00:12:48.159 "data_size": 65536 00:12:48.159 }, 00:12:48.159 { 00:12:48.159 "name": "BaseBdev4", 00:12:48.159 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:48.159 "is_configured": true, 00:12:48.159 "data_offset": 0, 00:12:48.159 "data_size": 65536 00:12:48.159 } 00:12:48.159 ] 00:12:48.159 }' 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.159 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.724 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.724 [2024-11-20 14:29:49.679558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
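The per-slot checks in this trace (`bdev_raid.sh@300`, `@304`, `@308`) all follow the same pattern: index into `base_bdevs_list` with jq and compare `is_configured`. A minimal reproduction using canned JSON is below; the slot values mirror the `base_bdevs_list` dump earlier in this log (removed slots keep their place with `"name": null`), and `jq` is assumed available.

```shell
#!/usr/bin/env bash
# Canned bdev_raid_get_bdevs output, shaped like the Existed_Raid dump above:
# removed base bdevs stay in the list as unconfigured null-name slots.
raid_json='[{"name": "Existed_Raid", "base_bdevs_list": [
  {"name": "BaseBdev1", "is_configured": true},
  {"name": null,        "is_configured": false},
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev4", "is_configured": true}
]}]'

# Same indexing style as bdev_raid.sh@304: check slot 2 after BaseBdev3 removal.
slot2=$(jq '.[0].base_bdevs_list[2].is_configured' <<< "$raid_json")
[[ $slot2 == false ]] && echo "slot 2 unconfigured, as expected after removal"
```

Keeping removed bdevs as null-name placeholders is what lets the test address slots by fixed index regardless of which members have been removed.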
00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.725 "name": "Existed_Raid", 00:12:48.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.725 "strip_size_kb": 0, 00:12:48.725 "state": "configuring", 00:12:48.725 "raid_level": "raid1", 00:12:48.725 "superblock": false, 00:12:48.725 "num_base_bdevs": 4, 00:12:48.725 "num_base_bdevs_discovered": 2, 00:12:48.725 "num_base_bdevs_operational": 4, 00:12:48.725 "base_bdevs_list": [ 00:12:48.725 { 00:12:48.725 "name": "BaseBdev1", 00:12:48.725 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:48.725 "is_configured": true, 00:12:48.725 "data_offset": 0, 00:12:48.725 "data_size": 65536 00:12:48.725 }, 00:12:48.725 { 00:12:48.725 "name": null, 00:12:48.725 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:48.725 "is_configured": false, 00:12:48.725 "data_offset": 0, 00:12:48.725 "data_size": 65536 00:12:48.725 }, 00:12:48.725 { 00:12:48.725 "name": null, 00:12:48.725 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:48.725 "is_configured": false, 00:12:48.725 "data_offset": 0, 00:12:48.725 "data_size": 65536 00:12:48.725 }, 00:12:48.725 { 00:12:48.725 "name": "BaseBdev4", 00:12:48.725 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:48.725 "is_configured": true, 00:12:48.725 "data_offset": 0, 00:12:48.725 "data_size": 65536 00:12:48.725 } 00:12:48.725 ] 00:12:48.725 }' 00:12:48.725 14:29:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.725 14:29:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.295 [2024-11-20 14:29:50.255754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.295 14:29:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.295 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.295 "name": "Existed_Raid", 00:12:49.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.295 "strip_size_kb": 0, 00:12:49.295 "state": "configuring", 00:12:49.295 "raid_level": "raid1", 00:12:49.295 "superblock": false, 00:12:49.295 "num_base_bdevs": 4, 00:12:49.295 "num_base_bdevs_discovered": 3, 00:12:49.295 "num_base_bdevs_operational": 4, 00:12:49.295 "base_bdevs_list": [ 00:12:49.295 { 00:12:49.295 "name": "BaseBdev1", 00:12:49.295 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:49.295 "is_configured": true, 00:12:49.295 "data_offset": 0, 00:12:49.295 "data_size": 65536 00:12:49.295 }, 00:12:49.295 { 00:12:49.295 "name": null, 00:12:49.295 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:49.295 "is_configured": false, 00:12:49.295 "data_offset": 
0, 00:12:49.295 "data_size": 65536 00:12:49.295 }, 00:12:49.295 { 00:12:49.295 "name": "BaseBdev3", 00:12:49.295 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:49.295 "is_configured": true, 00:12:49.295 "data_offset": 0, 00:12:49.295 "data_size": 65536 00:12:49.295 }, 00:12:49.295 { 00:12:49.295 "name": "BaseBdev4", 00:12:49.295 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:49.295 "is_configured": true, 00:12:49.296 "data_offset": 0, 00:12:49.296 "data_size": 65536 00:12:49.296 } 00:12:49.296 ] 00:12:49.296 }' 00:12:49.296 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.296 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.863 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.863 [2024-11-20 14:29:50.839932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.122 14:29:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.122 "name": "Existed_Raid", 00:12:50.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.122 "strip_size_kb": 0, 00:12:50.122 "state": "configuring", 00:12:50.122 
"raid_level": "raid1", 00:12:50.122 "superblock": false, 00:12:50.122 "num_base_bdevs": 4, 00:12:50.122 "num_base_bdevs_discovered": 2, 00:12:50.122 "num_base_bdevs_operational": 4, 00:12:50.122 "base_bdevs_list": [ 00:12:50.122 { 00:12:50.122 "name": null, 00:12:50.122 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:50.122 "is_configured": false, 00:12:50.122 "data_offset": 0, 00:12:50.122 "data_size": 65536 00:12:50.122 }, 00:12:50.122 { 00:12:50.122 "name": null, 00:12:50.122 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:50.122 "is_configured": false, 00:12:50.122 "data_offset": 0, 00:12:50.122 "data_size": 65536 00:12:50.122 }, 00:12:50.122 { 00:12:50.122 "name": "BaseBdev3", 00:12:50.122 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:50.122 "is_configured": true, 00:12:50.122 "data_offset": 0, 00:12:50.122 "data_size": 65536 00:12:50.122 }, 00:12:50.122 { 00:12:50.122 "name": "BaseBdev4", 00:12:50.122 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:50.122 "is_configured": true, 00:12:50.122 "data_offset": 0, 00:12:50.122 "data_size": 65536 00:12:50.122 } 00:12:50.122 ] 00:12:50.122 }' 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.122 14:29:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.380 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.380 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.638 [2024-11-20 14:29:51.479378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.638 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.639 "name": "Existed_Raid", 00:12:50.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.639 "strip_size_kb": 0, 00:12:50.639 "state": "configuring", 00:12:50.639 "raid_level": "raid1", 00:12:50.639 "superblock": false, 00:12:50.639 "num_base_bdevs": 4, 00:12:50.639 "num_base_bdevs_discovered": 3, 00:12:50.639 "num_base_bdevs_operational": 4, 00:12:50.639 "base_bdevs_list": [ 00:12:50.639 { 00:12:50.639 "name": null, 00:12:50.639 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:50.639 "is_configured": false, 00:12:50.639 "data_offset": 0, 00:12:50.639 "data_size": 65536 00:12:50.639 }, 00:12:50.639 { 00:12:50.639 "name": "BaseBdev2", 00:12:50.639 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:50.639 "is_configured": true, 00:12:50.639 "data_offset": 0, 00:12:50.639 "data_size": 65536 00:12:50.639 }, 00:12:50.639 { 00:12:50.639 "name": "BaseBdev3", 00:12:50.639 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:50.639 "is_configured": true, 00:12:50.639 "data_offset": 0, 00:12:50.639 "data_size": 65536 00:12:50.639 }, 00:12:50.639 { 00:12:50.639 "name": "BaseBdev4", 00:12:50.639 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:50.639 "is_configured": true, 00:12:50.639 "data_offset": 0, 00:12:50.639 "data_size": 65536 00:12:50.639 } 00:12:50.639 ] 00:12:50.639 }' 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.639 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 14:29:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.207 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.207 14:29:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:51.207 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 14:29:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a6f78145-4457-4249-b537-2e082eadc3d6 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 [2024-11-20 14:29:52.120083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:51.207 [2024-11-20 14:29:52.120144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:51.207 [2024-11-20 14:29:52.120161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:51.207 
[2024-11-20 14:29:52.120588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:51.207 [2024-11-20 14:29:52.120828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:51.207 [2024-11-20 14:29:52.120846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:51.207 [2024-11-20 14:29:52.121159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.207 NewBaseBdev 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 [ 00:12:51.207 { 00:12:51.207 "name": "NewBaseBdev", 00:12:51.207 "aliases": [ 00:12:51.207 "a6f78145-4457-4249-b537-2e082eadc3d6" 00:12:51.207 ], 00:12:51.207 "product_name": "Malloc disk", 00:12:51.207 "block_size": 512, 00:12:51.207 "num_blocks": 65536, 00:12:51.207 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:51.207 "assigned_rate_limits": { 00:12:51.207 "rw_ios_per_sec": 0, 00:12:51.207 "rw_mbytes_per_sec": 0, 00:12:51.207 "r_mbytes_per_sec": 0, 00:12:51.207 "w_mbytes_per_sec": 0 00:12:51.207 }, 00:12:51.207 "claimed": true, 00:12:51.207 "claim_type": "exclusive_write", 00:12:51.207 "zoned": false, 00:12:51.207 "supported_io_types": { 00:12:51.207 "read": true, 00:12:51.207 "write": true, 00:12:51.207 "unmap": true, 00:12:51.207 "flush": true, 00:12:51.207 "reset": true, 00:12:51.207 "nvme_admin": false, 00:12:51.207 "nvme_io": false, 00:12:51.207 "nvme_io_md": false, 00:12:51.207 "write_zeroes": true, 00:12:51.207 "zcopy": true, 00:12:51.207 "get_zone_info": false, 00:12:51.207 "zone_management": false, 00:12:51.207 "zone_append": false, 00:12:51.207 "compare": false, 00:12:51.207 "compare_and_write": false, 00:12:51.207 "abort": true, 00:12:51.207 "seek_hole": false, 00:12:51.207 "seek_data": false, 00:12:51.207 "copy": true, 00:12:51.207 "nvme_iov_md": false 00:12:51.207 }, 00:12:51.207 "memory_domains": [ 00:12:51.207 { 00:12:51.207 "dma_device_id": "system", 00:12:51.207 "dma_device_type": 1 00:12:51.207 }, 00:12:51.207 { 00:12:51.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.207 "dma_device_type": 2 00:12:51.207 } 00:12:51.207 ], 00:12:51.207 "driver_specific": {} 00:12:51.207 } 00:12:51.207 ] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.207 "name": "Existed_Raid", 00:12:51.207 "uuid": "26e18597-023a-4287-94bb-7a289c5c2e49", 00:12:51.207 "strip_size_kb": 0, 00:12:51.207 "state": "online", 00:12:51.207 
"raid_level": "raid1", 00:12:51.207 "superblock": false, 00:12:51.207 "num_base_bdevs": 4, 00:12:51.207 "num_base_bdevs_discovered": 4, 00:12:51.207 "num_base_bdevs_operational": 4, 00:12:51.207 "base_bdevs_list": [ 00:12:51.207 { 00:12:51.207 "name": "NewBaseBdev", 00:12:51.207 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:51.207 "is_configured": true, 00:12:51.207 "data_offset": 0, 00:12:51.207 "data_size": 65536 00:12:51.207 }, 00:12:51.207 { 00:12:51.207 "name": "BaseBdev2", 00:12:51.207 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:51.207 "is_configured": true, 00:12:51.207 "data_offset": 0, 00:12:51.207 "data_size": 65536 00:12:51.207 }, 00:12:51.207 { 00:12:51.207 "name": "BaseBdev3", 00:12:51.207 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:51.208 "is_configured": true, 00:12:51.208 "data_offset": 0, 00:12:51.208 "data_size": 65536 00:12:51.208 }, 00:12:51.208 { 00:12:51.208 "name": "BaseBdev4", 00:12:51.208 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:51.208 "is_configured": true, 00:12:51.208 "data_offset": 0, 00:12:51.208 "data_size": 65536 00:12:51.208 } 00:12:51.208 ] 00:12:51.208 }' 00:12:51.208 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.208 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.774 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.774 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.774 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.775 [2024-11-20 14:29:52.676742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.775 "name": "Existed_Raid", 00:12:51.775 "aliases": [ 00:12:51.775 "26e18597-023a-4287-94bb-7a289c5c2e49" 00:12:51.775 ], 00:12:51.775 "product_name": "Raid Volume", 00:12:51.775 "block_size": 512, 00:12:51.775 "num_blocks": 65536, 00:12:51.775 "uuid": "26e18597-023a-4287-94bb-7a289c5c2e49", 00:12:51.775 "assigned_rate_limits": { 00:12:51.775 "rw_ios_per_sec": 0, 00:12:51.775 "rw_mbytes_per_sec": 0, 00:12:51.775 "r_mbytes_per_sec": 0, 00:12:51.775 "w_mbytes_per_sec": 0 00:12:51.775 }, 00:12:51.775 "claimed": false, 00:12:51.775 "zoned": false, 00:12:51.775 "supported_io_types": { 00:12:51.775 "read": true, 00:12:51.775 "write": true, 00:12:51.775 "unmap": false, 00:12:51.775 "flush": false, 00:12:51.775 "reset": true, 00:12:51.775 "nvme_admin": false, 00:12:51.775 "nvme_io": false, 00:12:51.775 "nvme_io_md": false, 00:12:51.775 "write_zeroes": true, 00:12:51.775 "zcopy": false, 00:12:51.775 "get_zone_info": false, 00:12:51.775 "zone_management": false, 00:12:51.775 "zone_append": false, 00:12:51.775 "compare": false, 00:12:51.775 "compare_and_write": false, 00:12:51.775 "abort": false, 00:12:51.775 "seek_hole": false, 00:12:51.775 "seek_data": false, 00:12:51.775 
"copy": false, 00:12:51.775 "nvme_iov_md": false 00:12:51.775 }, 00:12:51.775 "memory_domains": [ 00:12:51.775 { 00:12:51.775 "dma_device_id": "system", 00:12:51.775 "dma_device_type": 1 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.775 "dma_device_type": 2 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "dma_device_id": "system", 00:12:51.775 "dma_device_type": 1 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.775 "dma_device_type": 2 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "dma_device_id": "system", 00:12:51.775 "dma_device_type": 1 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.775 "dma_device_type": 2 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "dma_device_id": "system", 00:12:51.775 "dma_device_type": 1 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.775 "dma_device_type": 2 00:12:51.775 } 00:12:51.775 ], 00:12:51.775 "driver_specific": { 00:12:51.775 "raid": { 00:12:51.775 "uuid": "26e18597-023a-4287-94bb-7a289c5c2e49", 00:12:51.775 "strip_size_kb": 0, 00:12:51.775 "state": "online", 00:12:51.775 "raid_level": "raid1", 00:12:51.775 "superblock": false, 00:12:51.775 "num_base_bdevs": 4, 00:12:51.775 "num_base_bdevs_discovered": 4, 00:12:51.775 "num_base_bdevs_operational": 4, 00:12:51.775 "base_bdevs_list": [ 00:12:51.775 { 00:12:51.775 "name": "NewBaseBdev", 00:12:51.775 "uuid": "a6f78145-4457-4249-b537-2e082eadc3d6", 00:12:51.775 "is_configured": true, 00:12:51.775 "data_offset": 0, 00:12:51.775 "data_size": 65536 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "name": "BaseBdev2", 00:12:51.775 "uuid": "2ede4e4b-be30-4700-a98a-8d2abfacfda4", 00:12:51.775 "is_configured": true, 00:12:51.775 "data_offset": 0, 00:12:51.775 "data_size": 65536 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "name": "BaseBdev3", 00:12:51.775 "uuid": "48c25680-e915-48a9-9f34-b7ddd5238af5", 00:12:51.775 
"is_configured": true, 00:12:51.775 "data_offset": 0, 00:12:51.775 "data_size": 65536 00:12:51.775 }, 00:12:51.775 { 00:12:51.775 "name": "BaseBdev4", 00:12:51.775 "uuid": "2633337c-c258-4754-8d3c-d7589b5c7b69", 00:12:51.775 "is_configured": true, 00:12:51.775 "data_offset": 0, 00:12:51.775 "data_size": 65536 00:12:51.775 } 00:12:51.775 ] 00:12:51.775 } 00:12:51.775 } 00:12:51.775 }' 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:51.775 BaseBdev2 00:12:51.775 BaseBdev3 00:12:51.775 BaseBdev4' 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.775 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.034 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.034 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.034 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.034 14:29:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.034 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:52.035 14:29:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 14:29:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 [2024-11-20 14:29:53.040396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.035 [2024-11-20 14:29:53.040559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.035 [2024-11-20 14:29:53.040710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.035 [2024-11-20 14:29:53.041109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.035 [2024-11-20 14:29:53.041134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73407
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73407 ']'
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73407
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73407
00:12:52.035 killing process with pid 73407
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73407'
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73407
00:12:52.035 [2024-11-20 14:29:53.077243] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:52.035 14:29:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73407
00:12:52.602 [2024-11-20 14:29:53.432195] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:53.539 ************************************
00:12:53.539 END TEST raid_state_function_test
00:12:53.539 ************************************
00:12:53.539 14:29:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:12:53.539
00:12:53.539 real 0m12.804s
00:12:53.539 user 0m21.175s
00:12:53.539 sys 0m1.776s
00:12:53.539 14:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:53.539 14:29:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:53.539 14:29:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true
00:12:53.539 14:29:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:12:53.539 14:29:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:53.539 14:29:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:53.798 ************************************
00:12:53.798 START TEST raid_state_function_test_sb
00:12:53.798 ************************************
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74089
00:12:53.798 Process raid pid: 74089
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74089'
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74089
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74089 ']'
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:53.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:53.798 14:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:53.798 [2024-11-20 14:29:54.725258] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization...
00:12:53.798 [2024-11-20 14:29:54.725440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:54.059 [2024-11-20 14:29:54.917498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:54.059 [2024-11-20 14:29:55.062362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:54.322 [2024-11-20 14:29:55.298540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:54.322 [2024-11-20 14:29:55.298595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.888 [2024-11-20 14:29:55.721847] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:54.888 [2024-11-20 14:29:55.721948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:54.888 [2024-11-20 14:29:55.721969] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:54.888 [2024-11-20 14:29:55.721987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:54.888 [2024-11-20 14:29:55.721998] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:54.888 [2024-11-20 14:29:55.722013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:54.888 [2024-11-20 14:29:55.722030] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:54.888 [2024-11-20 14:29:55.722047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.888 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:54.889 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.889 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:54.889 "name": "Existed_Raid",
00:12:54.889 "uuid": "966cca3b-2d13-4559-af25-7f103e50844c",
00:12:54.889 "strip_size_kb": 0,
00:12:54.889 "state": "configuring",
00:12:54.889 "raid_level": "raid1",
00:12:54.889 "superblock": true,
00:12:54.889 "num_base_bdevs": 4,
00:12:54.889 "num_base_bdevs_discovered": 0,
00:12:54.889 "num_base_bdevs_operational": 4,
00:12:54.889 "base_bdevs_list": [
00:12:54.889 {
00:12:54.889 "name": "BaseBdev1",
00:12:54.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:54.889 "is_configured": false,
00:12:54.889 "data_offset": 0,
00:12:54.889 "data_size": 0
00:12:54.889 },
00:12:54.889 {
00:12:54.889 "name": "BaseBdev2",
00:12:54.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:54.889 "is_configured": false,
00:12:54.889 "data_offset": 0,
00:12:54.889 "data_size": 0
00:12:54.889 },
00:12:54.889 {
00:12:54.889 "name": "BaseBdev3",
00:12:54.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:54.889 "is_configured": false,
00:12:54.889 "data_offset": 0,
00:12:54.889 "data_size": 0
00:12:54.889 },
00:12:54.889 {
00:12:54.889 "name": "BaseBdev4",
00:12:54.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:54.889 "is_configured": false,
00:12:54.889 "data_offset": 0,
00:12:54.889 "data_size": 0
00:12:54.889 }
00:12:54.889 ]
00:12:54.889 }'
00:12:54.889 14:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:54.889 14:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.456 [2024-11-20 14:29:56.229925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:55.456 [2024-11-20 14:29:56.230730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.456 [2024-11-20 14:29:56.241908] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:55.456 [2024-11-20 14:29:56.241968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:55.456 [2024-11-20 14:29:56.241985] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:55.456 [2024-11-20 14:29:56.242002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:55.456 [2024-11-20 14:29:56.242012] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:55.456 [2024-11-20 14:29:56.242027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:55.456 [2024-11-20 14:29:56.242037] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:55.456 [2024-11-20 14:29:56.242051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.456 [2024-11-20 14:29:56.289582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:55.456 BaseBdev1
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.456 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.456 [
00:12:55.456 {
00:12:55.456 "name": "BaseBdev1",
00:12:55.456 "aliases": [
00:12:55.456 "18b0077c-47c5-4632-b1fb-db5389a709c2"
00:12:55.456 ],
00:12:55.456 "product_name": "Malloc disk",
00:12:55.456 "block_size": 512,
00:12:55.456 "num_blocks": 65536,
00:12:55.456 "uuid": "18b0077c-47c5-4632-b1fb-db5389a709c2",
00:12:55.456 "assigned_rate_limits": {
00:12:55.456 "rw_ios_per_sec": 0,
00:12:55.456 "rw_mbytes_per_sec": 0,
00:12:55.456 "r_mbytes_per_sec": 0,
00:12:55.456 "w_mbytes_per_sec": 0
00:12:55.456 },
00:12:55.456 "claimed": true,
00:12:55.456 "claim_type": "exclusive_write",
00:12:55.456 "zoned": false,
00:12:55.456 "supported_io_types": {
00:12:55.456 "read": true,
00:12:55.456 "write": true,
00:12:55.456 "unmap": true,
00:12:55.456 "flush": true,
00:12:55.456 "reset": true,
00:12:55.456 "nvme_admin": false,
00:12:55.456 "nvme_io": false,
00:12:55.456 "nvme_io_md": false,
00:12:55.456 "write_zeroes": true,
00:12:55.456 "zcopy": true,
00:12:55.456 "get_zone_info": false,
00:12:55.456 "zone_management": false,
00:12:55.456 "zone_append": false,
00:12:55.456 "compare": false,
00:12:55.456 "compare_and_write": false,
00:12:55.457 "abort": true,
00:12:55.457 "seek_hole": false,
00:12:55.457 "seek_data": false,
00:12:55.457 "copy": true,
00:12:55.457 "nvme_iov_md": false
00:12:55.457 },
00:12:55.457 "memory_domains": [
00:12:55.457 {
00:12:55.457 "dma_device_id": "system",
00:12:55.457 "dma_device_type": 1
00:12:55.457 },
00:12:55.457 {
00:12:55.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:55.457 "dma_device_type": 2
00:12:55.457 }
00:12:55.457 ],
00:12:55.457 "driver_specific": {}
00:12:55.457 }
00:12:55.457 ]
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:55.457 "name": "Existed_Raid",
00:12:55.457 "uuid": "86a2a9cf-dcbf-4064-bc45-fc446c889e2a",
00:12:55.457 "strip_size_kb": 0,
00:12:55.457 "state": "configuring",
00:12:55.457 "raid_level": "raid1",
00:12:55.457 "superblock": true,
00:12:55.457 "num_base_bdevs": 4,
00:12:55.457 "num_base_bdevs_discovered": 1,
00:12:55.457 "num_base_bdevs_operational": 4,
00:12:55.457 "base_bdevs_list": [
00:12:55.457 {
00:12:55.457 "name": "BaseBdev1",
00:12:55.457 "uuid": "18b0077c-47c5-4632-b1fb-db5389a709c2",
00:12:55.457 "is_configured": true,
00:12:55.457 "data_offset": 2048,
00:12:55.457 "data_size": 63488
00:12:55.457 },
00:12:55.457 {
00:12:55.457 "name": "BaseBdev2",
00:12:55.457 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.457 "is_configured": false,
00:12:55.457 "data_offset": 0,
00:12:55.457 "data_size": 0
00:12:55.457 },
00:12:55.457 {
00:12:55.457 "name": "BaseBdev3",
00:12:55.457 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.457 "is_configured": false,
00:12:55.457 "data_offset": 0,
00:12:55.457 "data_size": 0
00:12:55.457 },
00:12:55.457 {
00:12:55.457 "name": "BaseBdev4",
00:12:55.457 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:55.457 "is_configured": false,
00:12:55.457 "data_offset": 0,
00:12:55.457 "data_size": 0
00:12:55.457 }
00:12:55.457 ]
00:12:55.457 }'
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:55.457 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.023 [2024-11-20 14:29:56.809820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:12:56.023 [2024-11-20 14:29:56.810054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.023 [2024-11-20 14:29:56.817863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:56.023 [2024-11-20 14:29:56.820385] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:56.023 [2024-11-20 14:29:56.820444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:56.023 [2024-11-20 14:29:56.820460] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:56.023 [2024-11-20 14:29:56.820479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:56.023 [2024-11-20 14:29:56.820490] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:12:56.023 [2024-11-20 14:29:56.820505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.023 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.023 "name": "Existed_Raid",
00:12:56.023 "uuid": "1251bc9a-b2bd-47ed-b90b-8290efaa3597",
00:12:56.023 "strip_size_kb": 0,
00:12:56.023 "state": "configuring",
00:12:56.023 "raid_level": "raid1",
00:12:56.023 "superblock": true,
00:12:56.023 "num_base_bdevs": 4,
00:12:56.023 "num_base_bdevs_discovered": 1,
00:12:56.023 "num_base_bdevs_operational": 4,
00:12:56.023 "base_bdevs_list": [
00:12:56.023 {
00:12:56.023 "name": "BaseBdev1",
00:12:56.023 "uuid": "18b0077c-47c5-4632-b1fb-db5389a709c2",
00:12:56.023 "is_configured": true,
00:12:56.023 "data_offset": 2048,
00:12:56.023 "data_size": 63488
00:12:56.023 },
00:12:56.024 {
00:12:56.024 "name": "BaseBdev2",
00:12:56.024 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.024 "is_configured": false,
00:12:56.024 "data_offset": 0,
00:12:56.024 "data_size": 0
00:12:56.024 },
00:12:56.024 {
00:12:56.024 "name": "BaseBdev3",
00:12:56.024 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.024 "is_configured": false,
00:12:56.024 "data_offset": 0,
00:12:56.024 "data_size": 0
00:12:56.024 },
00:12:56.024 {
00:12:56.024 "name": "BaseBdev4",
00:12:56.024 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.024 "is_configured": false,
00:12:56.024 "data_offset": 0,
00:12:56.024 "data_size": 0
00:12:56.024 }
00:12:56.024 ]
00:12:56.024 }'
00:12:56.024 14:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.024 14:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.591 [2024-11-20 14:29:57.405696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:56.591 BaseBdev2
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.591 [
00:12:56.591 {
00:12:56.591 "name": "BaseBdev2",
00:12:56.591 "aliases": [
00:12:56.591 "65ce75f0-4bf1-4941-8cf4-c8704bee686c"
00:12:56.591 ],
00:12:56.591 "product_name": "Malloc disk",
00:12:56.591 "block_size": 512,
00:12:56.591 "num_blocks": 65536,
00:12:56.591 "uuid": "65ce75f0-4bf1-4941-8cf4-c8704bee686c",
00:12:56.591 "assigned_rate_limits": {
00:12:56.591 "rw_ios_per_sec": 0,
00:12:56.591 "rw_mbytes_per_sec": 0,
00:12:56.591 "r_mbytes_per_sec": 0,
00:12:56.591 "w_mbytes_per_sec": 0
00:12:56.591 },
00:12:56.591 "claimed": true,
00:12:56.591 "claim_type": "exclusive_write",
00:12:56.591 "zoned": false,
00:12:56.591 "supported_io_types": {
00:12:56.591 "read": true,
00:12:56.591 "write": true,
00:12:56.591 "unmap": true,
00:12:56.591 "flush": true,
00:12:56.591 "reset": true,
00:12:56.591 "nvme_admin": false,
00:12:56.591 "nvme_io": false,
00:12:56.591 "nvme_io_md": false,
00:12:56.591 "write_zeroes": true,
00:12:56.591 "zcopy": true,
00:12:56.591 "get_zone_info": false,
00:12:56.591 "zone_management": false,
00:12:56.591 "zone_append": false,
00:12:56.591 "compare": false,
00:12:56.591 "compare_and_write": false,
00:12:56.591 "abort": true,
00:12:56.591 "seek_hole": false,
00:12:56.591 "seek_data": false,
00:12:56.591 "copy": true,
00:12:56.591 "nvme_iov_md": false
00:12:56.591 },
00:12:56.591 "memory_domains": [
00:12:56.591 {
00:12:56.591 "dma_device_id": "system",
00:12:56.591 "dma_device_type": 1
00:12:56.591 },
00:12:56.591 {
00:12:56.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:56.591 "dma_device_type": 2
00:12:56.591 }
00:12:56.591 ],
00:12:56.591 "driver_specific": {}
00:12:56.591 }
00:12:56.591 ]
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:56.591 "name": "Existed_Raid",
00:12:56.591 "uuid": "1251bc9a-b2bd-47ed-b90b-8290efaa3597",
00:12:56.591 "strip_size_kb": 0,
00:12:56.591 "state": "configuring",
00:12:56.591 "raid_level": "raid1",
00:12:56.591 "superblock": true,
00:12:56.591 "num_base_bdevs": 4,
00:12:56.591 "num_base_bdevs_discovered": 2,
00:12:56.591 "num_base_bdevs_operational": 4,
00:12:56.591 "base_bdevs_list": [
00:12:56.591 {
00:12:56.591 "name": "BaseBdev1",
00:12:56.591 "uuid": "18b0077c-47c5-4632-b1fb-db5389a709c2",
00:12:56.591 "is_configured": true,
00:12:56.591 "data_offset": 2048,
00:12:56.591 "data_size": 63488
00:12:56.591 },
00:12:56.591 {
00:12:56.591 "name": "BaseBdev2",
00:12:56.591 "uuid": "65ce75f0-4bf1-4941-8cf4-c8704bee686c",
00:12:56.591 "is_configured": true,
00:12:56.591 "data_offset": 2048,
00:12:56.591 "data_size": 63488
00:12:56.591 },
00:12:56.591 {
00:12:56.591 "name": "BaseBdev3",
00:12:56.591 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.591 "is_configured": false,
00:12:56.591 "data_offset": 0,
00:12:56.591 "data_size": 0
00:12:56.591 },
00:12:56.591 {
00:12:56.591 "name": "BaseBdev4",
00:12:56.591 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:56.591 "is_configured": false,
00:12:56.591 "data_offset": 0,
00:12:56.591 "data_size": 0
00:12:56.591 }
00:12:56.591 ]
00:12:56.591 }'
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:56.591 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:57.158 14:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:57.158 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.158 14:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:57.158 [2024-11-20 14:29:58.000258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:57.158 BaseBdev3
00:12:57.158 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.158 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:57.158 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:57.158 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:57.158 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:57.158 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:57.158 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:57.159 [
00:12:57.159 {
00:12:57.159 "name": "BaseBdev3",
00:12:57.159 "aliases": [
00:12:57.159 "7f09883d-42fb-4c2c-aca4-a0e3e549af16"
00:12:57.159 ],
00:12:57.159 "product_name": "Malloc disk",
00:12:57.159 "block_size": 512,
00:12:57.159 "num_blocks": 65536,
00:12:57.159 "uuid": "7f09883d-42fb-4c2c-aca4-a0e3e549af16",
00:12:57.159 "assigned_rate_limits": {
00:12:57.159 "rw_ios_per_sec": 0,
00:12:57.159 "rw_mbytes_per_sec": 0,
00:12:57.159 "r_mbytes_per_sec": 0,
00:12:57.159 "w_mbytes_per_sec": 0
00:12:57.159 },
00:12:57.159 "claimed": true,
00:12:57.159 "claim_type": "exclusive_write",
00:12:57.159 "zoned": false,
00:12:57.159 "supported_io_types": {
00:12:57.159 "read": true,
00:12:57.159 "write": true,
00:12:57.159 "unmap": true,
00:12:57.159 "flush": true,
00:12:57.159 "reset": true,
00:12:57.159 "nvme_admin": false,
00:12:57.159 "nvme_io": false,
00:12:57.159 "nvme_io_md": false,
00:12:57.159 "write_zeroes": true,
00:12:57.159 "zcopy": true,
00:12:57.159 "get_zone_info": false,
00:12:57.159 "zone_management": false,
00:12:57.159 "zone_append": false,
00:12:57.159 "compare": false,
00:12:57.159 "compare_and_write": false,
00:12:57.159 "abort": true,
00:12:57.159 "seek_hole": false,
00:12:57.159 "seek_data": false,
00:12:57.159 "copy": true,
00:12:57.159 "nvme_iov_md": false
00:12:57.159 },
00:12:57.159 "memory_domains": [
00:12:57.159 {
00:12:57.159 "dma_device_id": "system",
00:12:57.159 "dma_device_type": 1
00:12:57.159 },
00:12:57.159 {
00:12:57.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:57.159 "dma_device_type": 2
00:12:57.159 }
00:12:57.159 ],
00:12:57.159 "driver_specific": {}
00:12:57.159 }
00:12:57.159 ]
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local
strip_size=0 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.159 "name": "Existed_Raid", 00:12:57.159 "uuid": "1251bc9a-b2bd-47ed-b90b-8290efaa3597", 00:12:57.159 "strip_size_kb": 0, 00:12:57.159 "state": "configuring", 00:12:57.159 "raid_level": "raid1", 00:12:57.159 "superblock": true, 00:12:57.159 "num_base_bdevs": 4, 00:12:57.159 "num_base_bdevs_discovered": 3, 00:12:57.159 "num_base_bdevs_operational": 4, 00:12:57.159 "base_bdevs_list": [ 00:12:57.159 { 00:12:57.159 "name": "BaseBdev1", 00:12:57.159 "uuid": "18b0077c-47c5-4632-b1fb-db5389a709c2", 00:12:57.159 "is_configured": true, 00:12:57.159 "data_offset": 2048, 00:12:57.159 "data_size": 63488 00:12:57.159 }, 00:12:57.159 { 00:12:57.159 "name": "BaseBdev2", 00:12:57.159 "uuid": 
"65ce75f0-4bf1-4941-8cf4-c8704bee686c", 00:12:57.159 "is_configured": true, 00:12:57.159 "data_offset": 2048, 00:12:57.159 "data_size": 63488 00:12:57.159 }, 00:12:57.159 { 00:12:57.159 "name": "BaseBdev3", 00:12:57.159 "uuid": "7f09883d-42fb-4c2c-aca4-a0e3e549af16", 00:12:57.159 "is_configured": true, 00:12:57.159 "data_offset": 2048, 00:12:57.159 "data_size": 63488 00:12:57.159 }, 00:12:57.159 { 00:12:57.159 "name": "BaseBdev4", 00:12:57.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.159 "is_configured": false, 00:12:57.159 "data_offset": 0, 00:12:57.159 "data_size": 0 00:12:57.159 } 00:12:57.159 ] 00:12:57.159 }' 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.159 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.726 [2024-11-20 14:29:58.649531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:57.726 [2024-11-20 14:29:58.649958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:57.726 [2024-11-20 14:29:58.649980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:57.726 BaseBdev4 00:12:57.726 [2024-11-20 14:29:58.650321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:57.726 [2024-11-20 14:29:58.650575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:57.726 [2024-11-20 14:29:58.650599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:57.726 [2024-11-20 14:29:58.650803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.726 [ 00:12:57.726 { 00:12:57.726 "name": "BaseBdev4", 00:12:57.726 "aliases": [ 00:12:57.726 "70c48e78-a301-4cfb-b61f-76c3c2a6655b" 00:12:57.726 ], 00:12:57.726 "product_name": "Malloc disk", 00:12:57.726 "block_size": 512, 00:12:57.726 
"num_blocks": 65536, 00:12:57.726 "uuid": "70c48e78-a301-4cfb-b61f-76c3c2a6655b", 00:12:57.726 "assigned_rate_limits": { 00:12:57.726 "rw_ios_per_sec": 0, 00:12:57.726 "rw_mbytes_per_sec": 0, 00:12:57.726 "r_mbytes_per_sec": 0, 00:12:57.726 "w_mbytes_per_sec": 0 00:12:57.726 }, 00:12:57.726 "claimed": true, 00:12:57.726 "claim_type": "exclusive_write", 00:12:57.726 "zoned": false, 00:12:57.726 "supported_io_types": { 00:12:57.726 "read": true, 00:12:57.726 "write": true, 00:12:57.726 "unmap": true, 00:12:57.726 "flush": true, 00:12:57.726 "reset": true, 00:12:57.726 "nvme_admin": false, 00:12:57.726 "nvme_io": false, 00:12:57.726 "nvme_io_md": false, 00:12:57.726 "write_zeroes": true, 00:12:57.726 "zcopy": true, 00:12:57.726 "get_zone_info": false, 00:12:57.726 "zone_management": false, 00:12:57.726 "zone_append": false, 00:12:57.726 "compare": false, 00:12:57.726 "compare_and_write": false, 00:12:57.726 "abort": true, 00:12:57.726 "seek_hole": false, 00:12:57.726 "seek_data": false, 00:12:57.726 "copy": true, 00:12:57.726 "nvme_iov_md": false 00:12:57.726 }, 00:12:57.726 "memory_domains": [ 00:12:57.726 { 00:12:57.726 "dma_device_id": "system", 00:12:57.726 "dma_device_type": 1 00:12:57.726 }, 00:12:57.726 { 00:12:57.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.726 "dma_device_type": 2 00:12:57.726 } 00:12:57.726 ], 00:12:57.726 "driver_specific": {} 00:12:57.726 } 00:12:57.726 ] 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.726 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.726 "name": "Existed_Raid", 00:12:57.726 "uuid": "1251bc9a-b2bd-47ed-b90b-8290efaa3597", 00:12:57.726 "strip_size_kb": 0, 00:12:57.726 "state": "online", 00:12:57.726 "raid_level": "raid1", 00:12:57.726 "superblock": true, 00:12:57.726 "num_base_bdevs": 4, 
00:12:57.726 "num_base_bdevs_discovered": 4, 00:12:57.726 "num_base_bdevs_operational": 4, 00:12:57.726 "base_bdevs_list": [ 00:12:57.726 { 00:12:57.726 "name": "BaseBdev1", 00:12:57.726 "uuid": "18b0077c-47c5-4632-b1fb-db5389a709c2", 00:12:57.726 "is_configured": true, 00:12:57.726 "data_offset": 2048, 00:12:57.726 "data_size": 63488 00:12:57.726 }, 00:12:57.726 { 00:12:57.727 "name": "BaseBdev2", 00:12:57.727 "uuid": "65ce75f0-4bf1-4941-8cf4-c8704bee686c", 00:12:57.727 "is_configured": true, 00:12:57.727 "data_offset": 2048, 00:12:57.727 "data_size": 63488 00:12:57.727 }, 00:12:57.727 { 00:12:57.727 "name": "BaseBdev3", 00:12:57.727 "uuid": "7f09883d-42fb-4c2c-aca4-a0e3e549af16", 00:12:57.727 "is_configured": true, 00:12:57.727 "data_offset": 2048, 00:12:57.727 "data_size": 63488 00:12:57.727 }, 00:12:57.727 { 00:12:57.727 "name": "BaseBdev4", 00:12:57.727 "uuid": "70c48e78-a301-4cfb-b61f-76c3c2a6655b", 00:12:57.727 "is_configured": true, 00:12:57.727 "data_offset": 2048, 00:12:57.727 "data_size": 63488 00:12:57.727 } 00:12:57.727 ] 00:12:57.727 }' 00:12:57.727 14:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.727 14:29:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.298 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:58.299 
14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.299 [2024-11-20 14:29:59.234280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:58.299 "name": "Existed_Raid", 00:12:58.299 "aliases": [ 00:12:58.299 "1251bc9a-b2bd-47ed-b90b-8290efaa3597" 00:12:58.299 ], 00:12:58.299 "product_name": "Raid Volume", 00:12:58.299 "block_size": 512, 00:12:58.299 "num_blocks": 63488, 00:12:58.299 "uuid": "1251bc9a-b2bd-47ed-b90b-8290efaa3597", 00:12:58.299 "assigned_rate_limits": { 00:12:58.299 "rw_ios_per_sec": 0, 00:12:58.299 "rw_mbytes_per_sec": 0, 00:12:58.299 "r_mbytes_per_sec": 0, 00:12:58.299 "w_mbytes_per_sec": 0 00:12:58.299 }, 00:12:58.299 "claimed": false, 00:12:58.299 "zoned": false, 00:12:58.299 "supported_io_types": { 00:12:58.299 "read": true, 00:12:58.299 "write": true, 00:12:58.299 "unmap": false, 00:12:58.299 "flush": false, 00:12:58.299 "reset": true, 00:12:58.299 "nvme_admin": false, 00:12:58.299 "nvme_io": false, 00:12:58.299 "nvme_io_md": false, 00:12:58.299 "write_zeroes": true, 00:12:58.299 "zcopy": false, 00:12:58.299 "get_zone_info": false, 00:12:58.299 "zone_management": false, 00:12:58.299 "zone_append": false, 00:12:58.299 "compare": false, 00:12:58.299 "compare_and_write": false, 00:12:58.299 "abort": false, 00:12:58.299 "seek_hole": false, 00:12:58.299 "seek_data": false, 00:12:58.299 "copy": false, 00:12:58.299 
"nvme_iov_md": false 00:12:58.299 }, 00:12:58.299 "memory_domains": [ 00:12:58.299 { 00:12:58.299 "dma_device_id": "system", 00:12:58.299 "dma_device_type": 1 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.299 "dma_device_type": 2 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "dma_device_id": "system", 00:12:58.299 "dma_device_type": 1 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.299 "dma_device_type": 2 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "dma_device_id": "system", 00:12:58.299 "dma_device_type": 1 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.299 "dma_device_type": 2 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "dma_device_id": "system", 00:12:58.299 "dma_device_type": 1 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.299 "dma_device_type": 2 00:12:58.299 } 00:12:58.299 ], 00:12:58.299 "driver_specific": { 00:12:58.299 "raid": { 00:12:58.299 "uuid": "1251bc9a-b2bd-47ed-b90b-8290efaa3597", 00:12:58.299 "strip_size_kb": 0, 00:12:58.299 "state": "online", 00:12:58.299 "raid_level": "raid1", 00:12:58.299 "superblock": true, 00:12:58.299 "num_base_bdevs": 4, 00:12:58.299 "num_base_bdevs_discovered": 4, 00:12:58.299 "num_base_bdevs_operational": 4, 00:12:58.299 "base_bdevs_list": [ 00:12:58.299 { 00:12:58.299 "name": "BaseBdev1", 00:12:58.299 "uuid": "18b0077c-47c5-4632-b1fb-db5389a709c2", 00:12:58.299 "is_configured": true, 00:12:58.299 "data_offset": 2048, 00:12:58.299 "data_size": 63488 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "name": "BaseBdev2", 00:12:58.299 "uuid": "65ce75f0-4bf1-4941-8cf4-c8704bee686c", 00:12:58.299 "is_configured": true, 00:12:58.299 "data_offset": 2048, 00:12:58.299 "data_size": 63488 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "name": "BaseBdev3", 00:12:58.299 "uuid": "7f09883d-42fb-4c2c-aca4-a0e3e549af16", 00:12:58.299 "is_configured": true, 
00:12:58.299 "data_offset": 2048, 00:12:58.299 "data_size": 63488 00:12:58.299 }, 00:12:58.299 { 00:12:58.299 "name": "BaseBdev4", 00:12:58.299 "uuid": "70c48e78-a301-4cfb-b61f-76c3c2a6655b", 00:12:58.299 "is_configured": true, 00:12:58.299 "data_offset": 2048, 00:12:58.299 "data_size": 63488 00:12:58.299 } 00:12:58.299 ] 00:12:58.299 } 00:12:58.299 } 00:12:58.299 }' 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:58.299 BaseBdev2 00:12:58.299 BaseBdev3 00:12:58.299 BaseBdev4' 00:12:58.299 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.588 14:29:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:58.588 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.589 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.589 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.589 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.589 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.589 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:58.589 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.589 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.589 [2024-11-20 14:29:59.602043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:58.847 14:29:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.847 "name": "Existed_Raid", 00:12:58.847 "uuid": "1251bc9a-b2bd-47ed-b90b-8290efaa3597", 00:12:58.847 "strip_size_kb": 0, 00:12:58.847 
"state": "online", 00:12:58.847 "raid_level": "raid1", 00:12:58.847 "superblock": true, 00:12:58.847 "num_base_bdevs": 4, 00:12:58.847 "num_base_bdevs_discovered": 3, 00:12:58.847 "num_base_bdevs_operational": 3, 00:12:58.847 "base_bdevs_list": [ 00:12:58.847 { 00:12:58.847 "name": null, 00:12:58.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.847 "is_configured": false, 00:12:58.847 "data_offset": 0, 00:12:58.847 "data_size": 63488 00:12:58.847 }, 00:12:58.847 { 00:12:58.847 "name": "BaseBdev2", 00:12:58.847 "uuid": "65ce75f0-4bf1-4941-8cf4-c8704bee686c", 00:12:58.847 "is_configured": true, 00:12:58.847 "data_offset": 2048, 00:12:58.847 "data_size": 63488 00:12:58.847 }, 00:12:58.847 { 00:12:58.847 "name": "BaseBdev3", 00:12:58.847 "uuid": "7f09883d-42fb-4c2c-aca4-a0e3e549af16", 00:12:58.847 "is_configured": true, 00:12:58.847 "data_offset": 2048, 00:12:58.847 "data_size": 63488 00:12:58.847 }, 00:12:58.847 { 00:12:58.847 "name": "BaseBdev4", 00:12:58.847 "uuid": "70c48e78-a301-4cfb-b61f-76c3c2a6655b", 00:12:58.847 "is_configured": true, 00:12:58.847 "data_offset": 2048, 00:12:58.847 "data_size": 63488 00:12:58.847 } 00:12:58.847 ] 00:12:58.847 }' 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.847 14:29:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 14:30:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 [2024-11-20 14:30:00.245502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.412 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 [2024-11-20 14:30:00.386836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.671 [2024-11-20 14:30:00.536380] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:59.671 [2024-11-20 14:30:00.536529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.671 [2024-11-20 14:30:00.627464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.671 [2024-11-20 14:30:00.627547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.671 [2024-11-20 14:30:00.627568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.671 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.929 BaseBdev2 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:59.929 [ 00:12:59.929 { 00:12:59.929 "name": "BaseBdev2", 00:12:59.929 "aliases": [ 00:12:59.929 "18d1a133-3e89-4e93-b48b-37dd3dd139db" 00:12:59.929 ], 00:12:59.929 "product_name": "Malloc disk", 00:12:59.929 "block_size": 512, 00:12:59.929 "num_blocks": 65536, 00:12:59.929 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:12:59.929 "assigned_rate_limits": { 00:12:59.929 "rw_ios_per_sec": 0, 00:12:59.929 "rw_mbytes_per_sec": 0, 00:12:59.929 "r_mbytes_per_sec": 0, 00:12:59.929 "w_mbytes_per_sec": 0 00:12:59.929 }, 00:12:59.929 "claimed": false, 00:12:59.929 "zoned": false, 00:12:59.929 "supported_io_types": { 00:12:59.929 "read": true, 00:12:59.929 "write": true, 00:12:59.929 "unmap": true, 00:12:59.929 "flush": true, 00:12:59.929 "reset": true, 00:12:59.929 "nvme_admin": false, 00:12:59.929 "nvme_io": false, 00:12:59.929 "nvme_io_md": false, 00:12:59.929 "write_zeroes": true, 00:12:59.929 "zcopy": true, 00:12:59.929 "get_zone_info": false, 00:12:59.929 "zone_management": false, 00:12:59.929 "zone_append": false, 00:12:59.929 "compare": false, 00:12:59.929 "compare_and_write": false, 00:12:59.929 "abort": true, 00:12:59.929 "seek_hole": false, 00:12:59.929 "seek_data": false, 00:12:59.929 "copy": true, 00:12:59.929 "nvme_iov_md": false 00:12:59.929 }, 00:12:59.929 "memory_domains": [ 00:12:59.929 { 00:12:59.929 "dma_device_id": "system", 00:12:59.929 "dma_device_type": 1 00:12:59.929 }, 00:12:59.929 { 00:12:59.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.929 "dma_device_type": 2 00:12:59.929 } 00:12:59.929 ], 00:12:59.929 "driver_specific": {} 00:12:59.929 } 00:12:59.929 ] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.929 14:30:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.929 BaseBdev3 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.929 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.929 14:30:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.929 [ 00:12:59.929 { 00:12:59.929 "name": "BaseBdev3", 00:12:59.929 "aliases": [ 00:12:59.929 "63c440b1-26d5-4763-a5e0-6454267b6705" 00:12:59.929 ], 00:12:59.929 "product_name": "Malloc disk", 00:12:59.929 "block_size": 512, 00:12:59.929 "num_blocks": 65536, 00:12:59.929 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:12:59.929 "assigned_rate_limits": { 00:12:59.929 "rw_ios_per_sec": 0, 00:12:59.929 "rw_mbytes_per_sec": 0, 00:12:59.929 "r_mbytes_per_sec": 0, 00:12:59.930 "w_mbytes_per_sec": 0 00:12:59.930 }, 00:12:59.930 "claimed": false, 00:12:59.930 "zoned": false, 00:12:59.930 "supported_io_types": { 00:12:59.930 "read": true, 00:12:59.930 "write": true, 00:12:59.930 "unmap": true, 00:12:59.930 "flush": true, 00:12:59.930 "reset": true, 00:12:59.930 "nvme_admin": false, 00:12:59.930 "nvme_io": false, 00:12:59.930 "nvme_io_md": false, 00:12:59.930 "write_zeroes": true, 00:12:59.930 "zcopy": true, 00:12:59.930 "get_zone_info": false, 00:12:59.930 "zone_management": false, 00:12:59.930 "zone_append": false, 00:12:59.930 "compare": false, 00:12:59.930 "compare_and_write": false, 00:12:59.930 "abort": true, 00:12:59.930 "seek_hole": false, 00:12:59.930 "seek_data": false, 00:12:59.930 "copy": true, 00:12:59.930 "nvme_iov_md": false 00:12:59.930 }, 00:12:59.930 "memory_domains": [ 00:12:59.930 { 00:12:59.930 "dma_device_id": "system", 00:12:59.930 "dma_device_type": 1 00:12:59.930 }, 00:12:59.930 { 00:12:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.930 "dma_device_type": 2 00:12:59.930 } 00:12:59.930 ], 00:12:59.930 "driver_specific": {} 00:12:59.930 } 00:12:59.930 ] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.930 BaseBdev4 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.930 [ 00:12:59.930 { 00:12:59.930 "name": "BaseBdev4", 00:12:59.930 "aliases": [ 00:12:59.930 "52a127ba-b826-44e6-8d10-3cf1e6918b23" 00:12:59.930 ], 00:12:59.930 "product_name": "Malloc disk", 00:12:59.930 "block_size": 512, 00:12:59.930 "num_blocks": 65536, 00:12:59.930 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:12:59.930 "assigned_rate_limits": { 00:12:59.930 "rw_ios_per_sec": 0, 00:12:59.930 "rw_mbytes_per_sec": 0, 00:12:59.930 "r_mbytes_per_sec": 0, 00:12:59.930 "w_mbytes_per_sec": 0 00:12:59.930 }, 00:12:59.930 "claimed": false, 00:12:59.930 "zoned": false, 00:12:59.930 "supported_io_types": { 00:12:59.930 "read": true, 00:12:59.930 "write": true, 00:12:59.930 "unmap": true, 00:12:59.930 "flush": true, 00:12:59.930 "reset": true, 00:12:59.930 "nvme_admin": false, 00:12:59.930 "nvme_io": false, 00:12:59.930 "nvme_io_md": false, 00:12:59.930 "write_zeroes": true, 00:12:59.930 "zcopy": true, 00:12:59.930 "get_zone_info": false, 00:12:59.930 "zone_management": false, 00:12:59.930 "zone_append": false, 00:12:59.930 "compare": false, 00:12:59.930 "compare_and_write": false, 00:12:59.930 "abort": true, 00:12:59.930 "seek_hole": false, 00:12:59.930 "seek_data": false, 00:12:59.930 "copy": true, 00:12:59.930 "nvme_iov_md": false 00:12:59.930 }, 00:12:59.930 "memory_domains": [ 00:12:59.930 { 00:12:59.930 "dma_device_id": "system", 00:12:59.930 "dma_device_type": 1 00:12:59.930 }, 00:12:59.930 { 00:12:59.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.930 "dma_device_type": 2 00:12:59.930 } 00:12:59.930 ], 00:12:59.930 "driver_specific": {} 00:12:59.930 } 00:12:59.930 ] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.930 [2024-11-20 14:30:00.934730] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.930 [2024-11-20 14:30:00.934962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.930 [2024-11-20 14:30:00.935122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.930 [2024-11-20 14:30:00.937926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.930 [2024-11-20 14:30:00.938118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.930 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.188 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.188 "name": "Existed_Raid", 00:13:00.188 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:00.188 "strip_size_kb": 0, 00:13:00.188 "state": "configuring", 00:13:00.188 "raid_level": "raid1", 00:13:00.188 "superblock": true, 00:13:00.188 "num_base_bdevs": 4, 00:13:00.188 "num_base_bdevs_discovered": 3, 00:13:00.188 "num_base_bdevs_operational": 4, 00:13:00.188 "base_bdevs_list": [ 00:13:00.188 { 00:13:00.188 "name": "BaseBdev1", 00:13:00.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.188 "is_configured": false, 00:13:00.188 "data_offset": 0, 00:13:00.188 "data_size": 0 00:13:00.188 }, 00:13:00.188 { 00:13:00.188 "name": "BaseBdev2", 00:13:00.188 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 
00:13:00.188 "is_configured": true, 00:13:00.188 "data_offset": 2048, 00:13:00.188 "data_size": 63488 00:13:00.188 }, 00:13:00.188 { 00:13:00.188 "name": "BaseBdev3", 00:13:00.188 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:00.188 "is_configured": true, 00:13:00.188 "data_offset": 2048, 00:13:00.188 "data_size": 63488 00:13:00.188 }, 00:13:00.188 { 00:13:00.188 "name": "BaseBdev4", 00:13:00.188 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:00.188 "is_configured": true, 00:13:00.188 "data_offset": 2048, 00:13:00.188 "data_size": 63488 00:13:00.188 } 00:13:00.188 ] 00:13:00.188 }' 00:13:00.188 14:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.188 14:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.447 [2024-11-20 14:30:01.434826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.447 "name": "Existed_Raid", 00:13:00.447 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:00.447 "strip_size_kb": 0, 00:13:00.447 "state": "configuring", 00:13:00.447 "raid_level": "raid1", 00:13:00.447 "superblock": true, 00:13:00.447 "num_base_bdevs": 4, 00:13:00.447 "num_base_bdevs_discovered": 2, 00:13:00.447 "num_base_bdevs_operational": 4, 00:13:00.447 "base_bdevs_list": [ 00:13:00.447 { 00:13:00.447 "name": "BaseBdev1", 00:13:00.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.447 "is_configured": false, 00:13:00.447 "data_offset": 0, 00:13:00.447 "data_size": 0 00:13:00.447 }, 00:13:00.447 { 00:13:00.447 "name": null, 00:13:00.447 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:00.447 
"is_configured": false, 00:13:00.447 "data_offset": 0, 00:13:00.447 "data_size": 63488 00:13:00.447 }, 00:13:00.447 { 00:13:00.447 "name": "BaseBdev3", 00:13:00.447 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:00.447 "is_configured": true, 00:13:00.447 "data_offset": 2048, 00:13:00.447 "data_size": 63488 00:13:00.447 }, 00:13:00.447 { 00:13:00.447 "name": "BaseBdev4", 00:13:00.447 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:00.447 "is_configured": true, 00:13:00.447 "data_offset": 2048, 00:13:00.447 "data_size": 63488 00:13:00.447 } 00:13:00.447 ] 00:13:00.447 }' 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.447 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.015 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:01.015 14:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.015 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.015 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.015 14:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.015 [2024-11-20 14:30:02.054333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.015 BaseBdev1 
00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.015 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.273 [ 00:13:01.273 { 00:13:01.273 "name": "BaseBdev1", 00:13:01.273 "aliases": [ 00:13:01.273 "989f02f5-4bd4-49e6-940b-34b3ce26ca8f" 00:13:01.274 ], 00:13:01.274 "product_name": "Malloc disk", 00:13:01.274 "block_size": 512, 00:13:01.274 "num_blocks": 65536, 00:13:01.274 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:01.274 "assigned_rate_limits": { 00:13:01.274 
"rw_ios_per_sec": 0, 00:13:01.274 "rw_mbytes_per_sec": 0, 00:13:01.274 "r_mbytes_per_sec": 0, 00:13:01.274 "w_mbytes_per_sec": 0 00:13:01.274 }, 00:13:01.274 "claimed": true, 00:13:01.274 "claim_type": "exclusive_write", 00:13:01.274 "zoned": false, 00:13:01.274 "supported_io_types": { 00:13:01.274 "read": true, 00:13:01.274 "write": true, 00:13:01.274 "unmap": true, 00:13:01.274 "flush": true, 00:13:01.274 "reset": true, 00:13:01.274 "nvme_admin": false, 00:13:01.274 "nvme_io": false, 00:13:01.274 "nvme_io_md": false, 00:13:01.274 "write_zeroes": true, 00:13:01.274 "zcopy": true, 00:13:01.274 "get_zone_info": false, 00:13:01.274 "zone_management": false, 00:13:01.274 "zone_append": false, 00:13:01.274 "compare": false, 00:13:01.274 "compare_and_write": false, 00:13:01.274 "abort": true, 00:13:01.274 "seek_hole": false, 00:13:01.274 "seek_data": false, 00:13:01.274 "copy": true, 00:13:01.274 "nvme_iov_md": false 00:13:01.274 }, 00:13:01.274 "memory_domains": [ 00:13:01.274 { 00:13:01.274 "dma_device_id": "system", 00:13:01.274 "dma_device_type": 1 00:13:01.274 }, 00:13:01.274 { 00:13:01.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.274 "dma_device_type": 2 00:13:01.274 } 00:13:01.274 ], 00:13:01.274 "driver_specific": {} 00:13:01.274 } 00:13:01.274 ] 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.274 "name": "Existed_Raid", 00:13:01.274 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:01.274 "strip_size_kb": 0, 00:13:01.274 "state": "configuring", 00:13:01.274 "raid_level": "raid1", 00:13:01.274 "superblock": true, 00:13:01.274 "num_base_bdevs": 4, 00:13:01.274 "num_base_bdevs_discovered": 3, 00:13:01.274 "num_base_bdevs_operational": 4, 00:13:01.274 "base_bdevs_list": [ 00:13:01.274 { 00:13:01.274 "name": "BaseBdev1", 00:13:01.274 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:01.274 "is_configured": true, 00:13:01.274 "data_offset": 2048, 00:13:01.274 "data_size": 63488 
00:13:01.274 }, 00:13:01.274 { 00:13:01.274 "name": null, 00:13:01.274 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:01.274 "is_configured": false, 00:13:01.274 "data_offset": 0, 00:13:01.274 "data_size": 63488 00:13:01.274 }, 00:13:01.274 { 00:13:01.274 "name": "BaseBdev3", 00:13:01.274 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:01.274 "is_configured": true, 00:13:01.274 "data_offset": 2048, 00:13:01.274 "data_size": 63488 00:13:01.274 }, 00:13:01.274 { 00:13:01.274 "name": "BaseBdev4", 00:13:01.274 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:01.274 "is_configured": true, 00:13:01.274 "data_offset": 2048, 00:13:01.274 "data_size": 63488 00:13:01.274 } 00:13:01.274 ] 00:13:01.274 }' 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.274 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.842 
[2024-11-20 14:30:02.670650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.842 14:30:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.842 "name": "Existed_Raid", 00:13:01.842 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:01.842 "strip_size_kb": 0, 00:13:01.842 "state": "configuring", 00:13:01.842 "raid_level": "raid1", 00:13:01.842 "superblock": true, 00:13:01.842 "num_base_bdevs": 4, 00:13:01.842 "num_base_bdevs_discovered": 2, 00:13:01.842 "num_base_bdevs_operational": 4, 00:13:01.842 "base_bdevs_list": [ 00:13:01.842 { 00:13:01.842 "name": "BaseBdev1", 00:13:01.842 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:01.842 "is_configured": true, 00:13:01.842 "data_offset": 2048, 00:13:01.842 "data_size": 63488 00:13:01.842 }, 00:13:01.842 { 00:13:01.842 "name": null, 00:13:01.842 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:01.842 "is_configured": false, 00:13:01.842 "data_offset": 0, 00:13:01.842 "data_size": 63488 00:13:01.842 }, 00:13:01.842 { 00:13:01.842 "name": null, 00:13:01.842 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:01.842 "is_configured": false, 00:13:01.842 "data_offset": 0, 00:13:01.842 "data_size": 63488 00:13:01.842 }, 00:13:01.842 { 00:13:01.842 "name": "BaseBdev4", 00:13:01.842 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:01.842 "is_configured": true, 00:13:01.842 "data_offset": 2048, 00:13:01.842 "data_size": 63488 00:13:01.842 } 00:13:01.842 ] 00:13:01.842 }' 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.842 14:30:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.410 
14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 [2024-11-20 14:30:03.238807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.410 "name": "Existed_Raid", 00:13:02.410 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:02.410 "strip_size_kb": 0, 00:13:02.410 "state": "configuring", 00:13:02.410 "raid_level": "raid1", 00:13:02.410 "superblock": true, 00:13:02.410 "num_base_bdevs": 4, 00:13:02.410 "num_base_bdevs_discovered": 3, 00:13:02.410 "num_base_bdevs_operational": 4, 00:13:02.410 "base_bdevs_list": [ 00:13:02.410 { 00:13:02.410 "name": "BaseBdev1", 00:13:02.410 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:02.410 "is_configured": true, 00:13:02.410 "data_offset": 2048, 00:13:02.410 "data_size": 63488 00:13:02.410 }, 00:13:02.410 { 00:13:02.410 "name": null, 00:13:02.410 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:02.410 "is_configured": false, 00:13:02.410 "data_offset": 0, 00:13:02.410 "data_size": 63488 00:13:02.410 }, 00:13:02.410 { 00:13:02.410 "name": "BaseBdev3", 00:13:02.410 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:02.410 "is_configured": true, 00:13:02.410 "data_offset": 2048, 00:13:02.410 "data_size": 63488 00:13:02.410 }, 00:13:02.410 { 00:13:02.410 "name": "BaseBdev4", 00:13:02.410 "uuid": 
"52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:02.410 "is_configured": true, 00:13:02.410 "data_offset": 2048, 00:13:02.410 "data_size": 63488 00:13:02.410 } 00:13:02.410 ] 00:13:02.410 }' 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.410 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.713 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.713 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:02.713 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.713 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.972 [2024-11-20 14:30:03.815029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.972 "name": "Existed_Raid", 00:13:02.972 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:02.972 "strip_size_kb": 0, 00:13:02.972 "state": "configuring", 00:13:02.972 "raid_level": "raid1", 00:13:02.972 "superblock": true, 00:13:02.972 "num_base_bdevs": 4, 00:13:02.972 "num_base_bdevs_discovered": 2, 00:13:02.972 "num_base_bdevs_operational": 4, 00:13:02.972 "base_bdevs_list": [ 00:13:02.972 { 00:13:02.972 "name": null, 00:13:02.972 
"uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:02.972 "is_configured": false, 00:13:02.972 "data_offset": 0, 00:13:02.972 "data_size": 63488 00:13:02.972 }, 00:13:02.972 { 00:13:02.972 "name": null, 00:13:02.972 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:02.972 "is_configured": false, 00:13:02.972 "data_offset": 0, 00:13:02.972 "data_size": 63488 00:13:02.972 }, 00:13:02.972 { 00:13:02.972 "name": "BaseBdev3", 00:13:02.972 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:02.972 "is_configured": true, 00:13:02.972 "data_offset": 2048, 00:13:02.972 "data_size": 63488 00:13:02.972 }, 00:13:02.972 { 00:13:02.972 "name": "BaseBdev4", 00:13:02.972 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:02.972 "is_configured": true, 00:13:02.972 "data_offset": 2048, 00:13:02.972 "data_size": 63488 00:13:02.972 } 00:13:02.972 ] 00:13:02.972 }' 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.972 14:30:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.540 [2024-11-20 14:30:04.508260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.540 "name": "Existed_Raid", 00:13:03.540 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:03.540 "strip_size_kb": 0, 00:13:03.540 "state": "configuring", 00:13:03.540 "raid_level": "raid1", 00:13:03.540 "superblock": true, 00:13:03.540 "num_base_bdevs": 4, 00:13:03.540 "num_base_bdevs_discovered": 3, 00:13:03.540 "num_base_bdevs_operational": 4, 00:13:03.540 "base_bdevs_list": [ 00:13:03.540 { 00:13:03.540 "name": null, 00:13:03.540 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:03.540 "is_configured": false, 00:13:03.540 "data_offset": 0, 00:13:03.540 "data_size": 63488 00:13:03.540 }, 00:13:03.540 { 00:13:03.540 "name": "BaseBdev2", 00:13:03.540 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:03.540 "is_configured": true, 00:13:03.540 "data_offset": 2048, 00:13:03.540 "data_size": 63488 00:13:03.540 }, 00:13:03.540 { 00:13:03.540 "name": "BaseBdev3", 00:13:03.540 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:03.540 "is_configured": true, 00:13:03.540 "data_offset": 2048, 00:13:03.540 "data_size": 63488 00:13:03.540 }, 00:13:03.540 { 00:13:03.540 "name": "BaseBdev4", 00:13:03.540 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:03.540 "is_configured": true, 00:13:03.540 "data_offset": 2048, 00:13:03.540 "data_size": 63488 00:13:03.540 } 00:13:03.540 ] 00:13:03.540 }' 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.540 14:30:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.107 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 989f02f5-4bd4-49e6-940b-34b3ce26ca8f 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.366 [2024-11-20 14:30:05.212040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:04.366 [2024-11-20 14:30:05.212349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:04.366 [2024-11-20 14:30:05.212375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.366 [2024-11-20 14:30:05.212724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:04.366 
NewBaseBdev 00:13:04.366 [2024-11-20 14:30:05.212934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:04.366 [2024-11-20 14:30:05.212951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:04.366 [2024-11-20 14:30:05.213119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.366 14:30:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.366 [ 00:13:04.366 { 00:13:04.366 "name": "NewBaseBdev", 00:13:04.366 "aliases": [ 00:13:04.366 "989f02f5-4bd4-49e6-940b-34b3ce26ca8f" 00:13:04.366 ], 00:13:04.366 "product_name": "Malloc disk", 00:13:04.366 "block_size": 512, 00:13:04.366 "num_blocks": 65536, 00:13:04.366 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:04.366 "assigned_rate_limits": { 00:13:04.366 "rw_ios_per_sec": 0, 00:13:04.366 "rw_mbytes_per_sec": 0, 00:13:04.366 "r_mbytes_per_sec": 0, 00:13:04.366 "w_mbytes_per_sec": 0 00:13:04.366 }, 00:13:04.366 "claimed": true, 00:13:04.366 "claim_type": "exclusive_write", 00:13:04.366 "zoned": false, 00:13:04.366 "supported_io_types": { 00:13:04.366 "read": true, 00:13:04.366 "write": true, 00:13:04.366 "unmap": true, 00:13:04.366 "flush": true, 00:13:04.366 "reset": true, 00:13:04.366 "nvme_admin": false, 00:13:04.366 "nvme_io": false, 00:13:04.366 "nvme_io_md": false, 00:13:04.366 "write_zeroes": true, 00:13:04.366 "zcopy": true, 00:13:04.366 "get_zone_info": false, 00:13:04.366 "zone_management": false, 00:13:04.366 "zone_append": false, 00:13:04.366 "compare": false, 00:13:04.366 "compare_and_write": false, 00:13:04.366 "abort": true, 00:13:04.366 "seek_hole": false, 00:13:04.366 "seek_data": false, 00:13:04.366 "copy": true, 00:13:04.366 "nvme_iov_md": false 00:13:04.366 }, 00:13:04.366 "memory_domains": [ 00:13:04.366 { 00:13:04.366 "dma_device_id": "system", 00:13:04.366 "dma_device_type": 1 00:13:04.366 }, 00:13:04.366 { 00:13:04.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.366 "dma_device_type": 2 00:13:04.366 } 00:13:04.367 ], 00:13:04.367 "driver_specific": {} 00:13:04.367 } 00:13:04.367 ] 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.367 "name": "Existed_Raid", 00:13:04.367 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:04.367 "strip_size_kb": 0, 00:13:04.367 "state": "online", 00:13:04.367 "raid_level": 
"raid1", 00:13:04.367 "superblock": true, 00:13:04.367 "num_base_bdevs": 4, 00:13:04.367 "num_base_bdevs_discovered": 4, 00:13:04.367 "num_base_bdevs_operational": 4, 00:13:04.367 "base_bdevs_list": [ 00:13:04.367 { 00:13:04.367 "name": "NewBaseBdev", 00:13:04.367 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:04.367 "is_configured": true, 00:13:04.367 "data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 }, 00:13:04.367 { 00:13:04.367 "name": "BaseBdev2", 00:13:04.367 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:04.367 "is_configured": true, 00:13:04.367 "data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 }, 00:13:04.367 { 00:13:04.367 "name": "BaseBdev3", 00:13:04.367 "uuid": "63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:04.367 "is_configured": true, 00:13:04.367 "data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 }, 00:13:04.367 { 00:13:04.367 "name": "BaseBdev4", 00:13:04.367 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:04.367 "is_configured": true, 00:13:04.367 "data_offset": 2048, 00:13:04.367 "data_size": 63488 00:13:04.367 } 00:13:04.367 ] 00:13:04.367 }' 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.367 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.933 [2024-11-20 14:30:05.764829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.933 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:04.933 "name": "Existed_Raid", 00:13:04.933 "aliases": [ 00:13:04.933 "c15d2159-7878-49c9-9144-e004250dd906" 00:13:04.933 ], 00:13:04.933 "product_name": "Raid Volume", 00:13:04.933 "block_size": 512, 00:13:04.933 "num_blocks": 63488, 00:13:04.933 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:04.933 "assigned_rate_limits": { 00:13:04.933 "rw_ios_per_sec": 0, 00:13:04.933 "rw_mbytes_per_sec": 0, 00:13:04.933 "r_mbytes_per_sec": 0, 00:13:04.933 "w_mbytes_per_sec": 0 00:13:04.933 }, 00:13:04.933 "claimed": false, 00:13:04.933 "zoned": false, 00:13:04.933 "supported_io_types": { 00:13:04.933 "read": true, 00:13:04.933 "write": true, 00:13:04.933 "unmap": false, 00:13:04.933 "flush": false, 00:13:04.933 "reset": true, 00:13:04.933 "nvme_admin": false, 00:13:04.933 "nvme_io": false, 00:13:04.933 "nvme_io_md": false, 00:13:04.933 "write_zeroes": true, 00:13:04.933 "zcopy": false, 00:13:04.933 "get_zone_info": false, 00:13:04.933 "zone_management": false, 00:13:04.933 "zone_append": false, 00:13:04.933 "compare": false, 00:13:04.933 "compare_and_write": false, 00:13:04.933 "abort": false, 00:13:04.933 "seek_hole": false, 
00:13:04.933 "seek_data": false, 00:13:04.933 "copy": false, 00:13:04.933 "nvme_iov_md": false 00:13:04.933 }, 00:13:04.933 "memory_domains": [ 00:13:04.933 { 00:13:04.933 "dma_device_id": "system", 00:13:04.933 "dma_device_type": 1 00:13:04.933 }, 00:13:04.933 { 00:13:04.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.933 "dma_device_type": 2 00:13:04.933 }, 00:13:04.933 { 00:13:04.933 "dma_device_id": "system", 00:13:04.933 "dma_device_type": 1 00:13:04.933 }, 00:13:04.933 { 00:13:04.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.934 "dma_device_type": 2 00:13:04.934 }, 00:13:04.934 { 00:13:04.934 "dma_device_id": "system", 00:13:04.934 "dma_device_type": 1 00:13:04.934 }, 00:13:04.934 { 00:13:04.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.934 "dma_device_type": 2 00:13:04.934 }, 00:13:04.934 { 00:13:04.934 "dma_device_id": "system", 00:13:04.934 "dma_device_type": 1 00:13:04.934 }, 00:13:04.934 { 00:13:04.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.934 "dma_device_type": 2 00:13:04.934 } 00:13:04.934 ], 00:13:04.934 "driver_specific": { 00:13:04.934 "raid": { 00:13:04.934 "uuid": "c15d2159-7878-49c9-9144-e004250dd906", 00:13:04.934 "strip_size_kb": 0, 00:13:04.934 "state": "online", 00:13:04.934 "raid_level": "raid1", 00:13:04.934 "superblock": true, 00:13:04.934 "num_base_bdevs": 4, 00:13:04.934 "num_base_bdevs_discovered": 4, 00:13:04.934 "num_base_bdevs_operational": 4, 00:13:04.934 "base_bdevs_list": [ 00:13:04.934 { 00:13:04.934 "name": "NewBaseBdev", 00:13:04.934 "uuid": "989f02f5-4bd4-49e6-940b-34b3ce26ca8f", 00:13:04.934 "is_configured": true, 00:13:04.934 "data_offset": 2048, 00:13:04.934 "data_size": 63488 00:13:04.934 }, 00:13:04.934 { 00:13:04.934 "name": "BaseBdev2", 00:13:04.934 "uuid": "18d1a133-3e89-4e93-b48b-37dd3dd139db", 00:13:04.934 "is_configured": true, 00:13:04.934 "data_offset": 2048, 00:13:04.934 "data_size": 63488 00:13:04.934 }, 00:13:04.934 { 00:13:04.934 "name": "BaseBdev3", 00:13:04.934 "uuid": 
"63c440b1-26d5-4763-a5e0-6454267b6705", 00:13:04.934 "is_configured": true, 00:13:04.934 "data_offset": 2048, 00:13:04.934 "data_size": 63488 00:13:04.934 }, 00:13:04.934 { 00:13:04.934 "name": "BaseBdev4", 00:13:04.934 "uuid": "52a127ba-b826-44e6-8d10-3cf1e6918b23", 00:13:04.934 "is_configured": true, 00:13:04.934 "data_offset": 2048, 00:13:04.934 "data_size": 63488 00:13:04.934 } 00:13:04.934 ] 00:13:04.934 } 00:13:04.934 } 00:13:04.934 }' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:04.934 BaseBdev2 00:13:04.934 BaseBdev3 00:13:04.934 BaseBdev4' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:04.934 14:30:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.192 
14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.192 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.193 [2024-11-20 14:30:06.156435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:05.193 [2024-11-20 14:30:06.156584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.193 [2024-11-20 14:30:06.156825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.193 [2024-11-20 14:30:06.157310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.193 [2024-11-20 14:30:06.157478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:05.193 14:30:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74089 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74089 ']' 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74089 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74089 00:13:05.193 killing process with pid 74089 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74089' 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74089 00:13:05.193 [2024-11-20 14:30:06.198380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.193 14:30:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74089 00:13:05.760 [2024-11-20 14:30:06.581494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:06.695 14:30:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:06.695 00:13:06.695 real 0m13.122s 00:13:06.695 user 0m21.635s 00:13:06.695 sys 0m1.879s 00:13:06.695 ************************************ 00:13:06.695 END TEST raid_state_function_test_sb 00:13:06.695 ************************************ 00:13:06.695 14:30:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.695 14:30:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.029 14:30:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:07.029 14:30:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:07.029 14:30:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.029 14:30:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.029 ************************************ 00:13:07.029 START TEST raid_superblock_test 00:13:07.029 ************************************ 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74772 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74772 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74772 ']' 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.029 14:30:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.029 [2024-11-20 14:30:07.902945] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:13:07.029 [2024-11-20 14:30:07.903414] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74772 ] 00:13:07.288 [2024-11-20 14:30:08.093066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.288 [2024-11-20 14:30:08.232764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.546 [2024-11-20 14:30:08.453266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:07.546 [2024-11-20 14:30:08.453339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:08.114 
14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 malloc1 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 [2024-11-20 14:30:08.946910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:08.114 [2024-11-20 14:30:08.947211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.114 [2024-11-20 14:30:08.947307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:08.114 [2024-11-20 14:30:08.947565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.114 [2024-11-20 14:30:08.950858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.114 [2024-11-20 14:30:08.951024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:08.114 pt1 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 malloc2 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 [2024-11-20 14:30:09.009063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:08.114 [2024-11-20 14:30:09.009284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.114 [2024-11-20 14:30:09.009367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:08.114 [2024-11-20 14:30:09.009569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.114 [2024-11-20 14:30:09.012746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.114 [2024-11-20 14:30:09.012904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:08.114 
pt2 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 malloc3 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 [2024-11-20 14:30:09.081321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:08.114 [2024-11-20 14:30:09.081517] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.114 [2024-11-20 14:30:09.081599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:08.114 [2024-11-20 14:30:09.081862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.114 [2024-11-20 14:30:09.085156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.114 [2024-11-20 14:30:09.085376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:08.114 pt3 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 malloc4 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 [2024-11-20 14:30:09.142280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:08.114 [2024-11-20 14:30:09.142511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.114 [2024-11-20 14:30:09.142587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:08.114 [2024-11-20 14:30:09.142713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.114 [2024-11-20 14:30:09.145899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.114 [2024-11-20 14:30:09.146067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:08.114 pt4 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 [2024-11-20 14:30:09.154430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:08.114 [2024-11-20 14:30:09.157159] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:08.114 [2024-11-20 14:30:09.157256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:08.114 [2024-11-20 14:30:09.157354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:08.114 [2024-11-20 14:30:09.157746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:08.114 [2024-11-20 14:30:09.157773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:08.114 [2024-11-20 14:30:09.158130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:08.114 [2024-11-20 14:30:09.158387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:08.114 [2024-11-20 14:30:09.158413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:08.114 [2024-11-20 14:30:09.158692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.114 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.115 
14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.115 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.373 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.373 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.373 "name": "raid_bdev1", 00:13:08.373 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:08.373 "strip_size_kb": 0, 00:13:08.373 "state": "online", 00:13:08.373 "raid_level": "raid1", 00:13:08.373 "superblock": true, 00:13:08.373 "num_base_bdevs": 4, 00:13:08.373 "num_base_bdevs_discovered": 4, 00:13:08.373 "num_base_bdevs_operational": 4, 00:13:08.373 "base_bdevs_list": [ 00:13:08.373 { 00:13:08.373 "name": "pt1", 00:13:08.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.373 "is_configured": true, 00:13:08.373 "data_offset": 2048, 00:13:08.373 "data_size": 63488 00:13:08.373 }, 00:13:08.373 { 00:13:08.373 "name": "pt2", 00:13:08.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.373 "is_configured": true, 00:13:08.373 "data_offset": 2048, 00:13:08.373 "data_size": 63488 00:13:08.373 }, 00:13:08.373 { 00:13:08.373 "name": "pt3", 00:13:08.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.373 "is_configured": true, 00:13:08.373 "data_offset": 2048, 00:13:08.373 "data_size": 63488 
00:13:08.373 }, 00:13:08.373 { 00:13:08.373 "name": "pt4", 00:13:08.373 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:08.373 "is_configured": true, 00:13:08.373 "data_offset": 2048, 00:13:08.373 "data_size": 63488 00:13:08.373 } 00:13:08.373 ] 00:13:08.373 }' 00:13:08.373 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.373 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.630 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.630 [2024-11-20 14:30:09.671316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.887 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.887 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.887 "name": "raid_bdev1", 00:13:08.887 "aliases": [ 00:13:08.887 "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a" 00:13:08.887 ], 
00:13:08.887 "product_name": "Raid Volume", 00:13:08.887 "block_size": 512, 00:13:08.887 "num_blocks": 63488, 00:13:08.887 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:08.887 "assigned_rate_limits": { 00:13:08.887 "rw_ios_per_sec": 0, 00:13:08.887 "rw_mbytes_per_sec": 0, 00:13:08.887 "r_mbytes_per_sec": 0, 00:13:08.887 "w_mbytes_per_sec": 0 00:13:08.887 }, 00:13:08.887 "claimed": false, 00:13:08.887 "zoned": false, 00:13:08.887 "supported_io_types": { 00:13:08.887 "read": true, 00:13:08.887 "write": true, 00:13:08.887 "unmap": false, 00:13:08.887 "flush": false, 00:13:08.887 "reset": true, 00:13:08.887 "nvme_admin": false, 00:13:08.887 "nvme_io": false, 00:13:08.887 "nvme_io_md": false, 00:13:08.887 "write_zeroes": true, 00:13:08.887 "zcopy": false, 00:13:08.887 "get_zone_info": false, 00:13:08.887 "zone_management": false, 00:13:08.887 "zone_append": false, 00:13:08.887 "compare": false, 00:13:08.887 "compare_and_write": false, 00:13:08.887 "abort": false, 00:13:08.887 "seek_hole": false, 00:13:08.887 "seek_data": false, 00:13:08.887 "copy": false, 00:13:08.887 "nvme_iov_md": false 00:13:08.887 }, 00:13:08.887 "memory_domains": [ 00:13:08.887 { 00:13:08.887 "dma_device_id": "system", 00:13:08.887 "dma_device_type": 1 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.887 "dma_device_type": 2 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "dma_device_id": "system", 00:13:08.887 "dma_device_type": 1 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.887 "dma_device_type": 2 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "dma_device_id": "system", 00:13:08.887 "dma_device_type": 1 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.887 "dma_device_type": 2 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "dma_device_id": "system", 00:13:08.887 "dma_device_type": 1 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:08.887 "dma_device_type": 2 00:13:08.887 } 00:13:08.887 ], 00:13:08.887 "driver_specific": { 00:13:08.887 "raid": { 00:13:08.887 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:08.887 "strip_size_kb": 0, 00:13:08.887 "state": "online", 00:13:08.887 "raid_level": "raid1", 00:13:08.887 "superblock": true, 00:13:08.887 "num_base_bdevs": 4, 00:13:08.887 "num_base_bdevs_discovered": 4, 00:13:08.887 "num_base_bdevs_operational": 4, 00:13:08.887 "base_bdevs_list": [ 00:13:08.887 { 00:13:08.887 "name": "pt1", 00:13:08.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:08.887 "is_configured": true, 00:13:08.887 "data_offset": 2048, 00:13:08.887 "data_size": 63488 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "name": "pt2", 00:13:08.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:08.887 "is_configured": true, 00:13:08.887 "data_offset": 2048, 00:13:08.887 "data_size": 63488 00:13:08.887 }, 00:13:08.887 { 00:13:08.887 "name": "pt3", 00:13:08.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:08.887 "is_configured": true, 00:13:08.887 "data_offset": 2048, 00:13:08.888 "data_size": 63488 00:13:08.888 }, 00:13:08.888 { 00:13:08.888 "name": "pt4", 00:13:08.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:08.888 "is_configured": true, 00:13:08.888 "data_offset": 2048, 00:13:08.888 "data_size": 63488 00:13:08.888 } 00:13:08.888 ] 00:13:08.888 } 00:13:08.888 } 00:13:08.888 }' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:08.888 pt2 00:13:08.888 pt3 00:13:08.888 pt4' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.888 14:30:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.888 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 14:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 [2024-11-20 14:30:10.023235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5b1ad43a-e28f-4c3d-8084-c3b3b04a398a 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5b1ad43a-e28f-4c3d-8084-c3b3b04a398a ']' 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 [2024-11-20 14:30:10.066901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.146 [2024-11-20 14:30:10.067086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.146 [2024-11-20 14:30:10.067317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.146 [2024-11-20 14:30:10.067460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.146 [2024-11-20 14:30:10.067503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.146 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.405 14:30:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.405 [2024-11-20 14:30:10.222960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:09.405 [2024-11-20 14:30:10.225842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:09.405 [2024-11-20 14:30:10.225922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:09.405 [2024-11-20 14:30:10.225983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:09.405 [2024-11-20 14:30:10.226057] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:09.405 [2024-11-20 14:30:10.226132] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:09.405 [2024-11-20 14:30:10.226172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:09.405 [2024-11-20 14:30:10.226205] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:09.405 [2024-11-20 14:30:10.226227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.405 [2024-11-20 14:30:10.226244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:09.405 request: 00:13:09.405 { 00:13:09.405 "name": "raid_bdev1", 00:13:09.405 "raid_level": "raid1", 00:13:09.405 "base_bdevs": [ 00:13:09.405 "malloc1", 00:13:09.405 "malloc2", 00:13:09.405 "malloc3", 00:13:09.405 "malloc4" 00:13:09.405 ], 00:13:09.405 "superblock": false, 00:13:09.405 "method": "bdev_raid_create", 00:13:09.405 "req_id": 1 00:13:09.405 } 00:13:09.405 Got JSON-RPC error response 00:13:09.405 response: 00:13:09.405 { 00:13:09.405 "code": -17, 00:13:09.405 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:09.405 } 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:09.405 
14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.405 [2024-11-20 14:30:10.287027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:09.405 [2024-11-20 14:30:10.287100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.405 [2024-11-20 14:30:10.287123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:09.405 [2024-11-20 14:30:10.287140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.405 [2024-11-20 14:30:10.290217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.405 [2024-11-20 14:30:10.290299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:09.405 [2024-11-20 14:30:10.290407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:09.405 [2024-11-20 14:30:10.290492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:09.405 pt1 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.405 14:30:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.405 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.405 "name": "raid_bdev1", 00:13:09.405 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:09.405 "strip_size_kb": 0, 00:13:09.405 "state": "configuring", 00:13:09.405 "raid_level": "raid1", 00:13:09.405 "superblock": true, 00:13:09.405 "num_base_bdevs": 4, 00:13:09.405 "num_base_bdevs_discovered": 1, 00:13:09.405 "num_base_bdevs_operational": 4, 00:13:09.405 "base_bdevs_list": [ 00:13:09.405 { 00:13:09.405 "name": "pt1", 00:13:09.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:09.405 "is_configured": true, 00:13:09.405 "data_offset": 2048, 00:13:09.405 "data_size": 63488 00:13:09.405 }, 00:13:09.405 { 00:13:09.405 "name": null, 00:13:09.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.405 "is_configured": false, 00:13:09.405 "data_offset": 2048, 00:13:09.405 "data_size": 63488 00:13:09.405 }, 00:13:09.405 { 00:13:09.405 "name": null, 00:13:09.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.405 
"is_configured": false, 00:13:09.406 "data_offset": 2048, 00:13:09.406 "data_size": 63488 00:13:09.406 }, 00:13:09.406 { 00:13:09.406 "name": null, 00:13:09.406 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:09.406 "is_configured": false, 00:13:09.406 "data_offset": 2048, 00:13:09.406 "data_size": 63488 00:13:09.406 } 00:13:09.406 ] 00:13:09.406 }' 00:13:09.406 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.406 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.972 [2024-11-20 14:30:10.811321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:09.972 [2024-11-20 14:30:10.811592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.972 [2024-11-20 14:30:10.811649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:09.972 [2024-11-20 14:30:10.811698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.972 [2024-11-20 14:30:10.812384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.972 [2024-11-20 14:30:10.812422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:09.972 [2024-11-20 14:30:10.812527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:09.972 [2024-11-20 14:30:10.812565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:09.972 pt2 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.972 [2024-11-20 14:30:10.819283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.972 "name": "raid_bdev1", 00:13:09.972 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:09.972 "strip_size_kb": 0, 00:13:09.972 "state": "configuring", 00:13:09.972 "raid_level": "raid1", 00:13:09.972 "superblock": true, 00:13:09.972 "num_base_bdevs": 4, 00:13:09.972 "num_base_bdevs_discovered": 1, 00:13:09.972 "num_base_bdevs_operational": 4, 00:13:09.972 "base_bdevs_list": [ 00:13:09.972 { 00:13:09.972 "name": "pt1", 00:13:09.972 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:09.972 "is_configured": true, 00:13:09.972 "data_offset": 2048, 00:13:09.972 "data_size": 63488 00:13:09.972 }, 00:13:09.972 { 00:13:09.972 "name": null, 00:13:09.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:09.972 "is_configured": false, 00:13:09.972 "data_offset": 0, 00:13:09.972 "data_size": 63488 00:13:09.972 }, 00:13:09.972 { 00:13:09.972 "name": null, 00:13:09.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:09.972 "is_configured": false, 00:13:09.972 "data_offset": 2048, 00:13:09.972 "data_size": 63488 00:13:09.972 }, 00:13:09.972 { 00:13:09.972 "name": null, 00:13:09.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:09.972 "is_configured": false, 00:13:09.972 "data_offset": 2048, 00:13:09.972 "data_size": 63488 00:13:09.972 } 00:13:09.972 ] 00:13:09.972 }' 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.972 14:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.540 [2024-11-20 14:30:11.335449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.540 [2024-11-20 14:30:11.335559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.540 [2024-11-20 14:30:11.335605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:10.540 [2024-11-20 14:30:11.335620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.540 [2024-11-20 14:30:11.336331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.540 [2024-11-20 14:30:11.336506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.540 [2024-11-20 14:30:11.336648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:10.540 [2024-11-20 14:30:11.336716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.540 pt2 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:10.540 14:30:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.540 [2024-11-20 14:30:11.347399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:10.540 [2024-11-20 14:30:11.347456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.540 [2024-11-20 14:30:11.347483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:10.540 [2024-11-20 14:30:11.347496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.540 [2024-11-20 14:30:11.348021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.540 [2024-11-20 14:30:11.348068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:10.540 [2024-11-20 14:30:11.348189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:10.540 [2024-11-20 14:30:11.348215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:10.540 pt3 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.540 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.540 [2024-11-20 14:30:11.355379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:10.540 [2024-11-20 
14:30:11.355432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.540 [2024-11-20 14:30:11.355459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:10.540 [2024-11-20 14:30:11.355473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.540 [2024-11-20 14:30:11.355963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.540 [2024-11-20 14:30:11.355996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:10.540 [2024-11-20 14:30:11.356090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:10.541 [2024-11-20 14:30:11.356132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:10.541 [2024-11-20 14:30:11.356316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:10.541 [2024-11-20 14:30:11.356333] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.541 [2024-11-20 14:30:11.356687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:10.541 [2024-11-20 14:30:11.356888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:10.541 [2024-11-20 14:30:11.356909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:10.541 [2024-11-20 14:30:11.357070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.541 pt4 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.541 "name": "raid_bdev1", 00:13:10.541 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:10.541 "strip_size_kb": 0, 00:13:10.541 "state": "online", 00:13:10.541 "raid_level": "raid1", 00:13:10.541 "superblock": true, 00:13:10.541 "num_base_bdevs": 4, 00:13:10.541 
"num_base_bdevs_discovered": 4, 00:13:10.541 "num_base_bdevs_operational": 4, 00:13:10.541 "base_bdevs_list": [ 00:13:10.541 { 00:13:10.541 "name": "pt1", 00:13:10.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.541 "is_configured": true, 00:13:10.541 "data_offset": 2048, 00:13:10.541 "data_size": 63488 00:13:10.541 }, 00:13:10.541 { 00:13:10.541 "name": "pt2", 00:13:10.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.541 "is_configured": true, 00:13:10.541 "data_offset": 2048, 00:13:10.541 "data_size": 63488 00:13:10.541 }, 00:13:10.541 { 00:13:10.541 "name": "pt3", 00:13:10.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.541 "is_configured": true, 00:13:10.541 "data_offset": 2048, 00:13:10.541 "data_size": 63488 00:13:10.541 }, 00:13:10.541 { 00:13:10.541 "name": "pt4", 00:13:10.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:10.541 "is_configured": true, 00:13:10.541 "data_offset": 2048, 00:13:10.541 "data_size": 63488 00:13:10.541 } 00:13:10.541 ] 00:13:10.541 }' 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.541 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.109 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.110 14:30:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.110 [2024-11-20 14:30:11.876209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.110 "name": "raid_bdev1", 00:13:11.110 "aliases": [ 00:13:11.110 "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a" 00:13:11.110 ], 00:13:11.110 "product_name": "Raid Volume", 00:13:11.110 "block_size": 512, 00:13:11.110 "num_blocks": 63488, 00:13:11.110 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:11.110 "assigned_rate_limits": { 00:13:11.110 "rw_ios_per_sec": 0, 00:13:11.110 "rw_mbytes_per_sec": 0, 00:13:11.110 "r_mbytes_per_sec": 0, 00:13:11.110 "w_mbytes_per_sec": 0 00:13:11.110 }, 00:13:11.110 "claimed": false, 00:13:11.110 "zoned": false, 00:13:11.110 "supported_io_types": { 00:13:11.110 "read": true, 00:13:11.110 "write": true, 00:13:11.110 "unmap": false, 00:13:11.110 "flush": false, 00:13:11.110 "reset": true, 00:13:11.110 "nvme_admin": false, 00:13:11.110 "nvme_io": false, 00:13:11.110 "nvme_io_md": false, 00:13:11.110 "write_zeroes": true, 00:13:11.110 "zcopy": false, 00:13:11.110 "get_zone_info": false, 00:13:11.110 "zone_management": false, 00:13:11.110 "zone_append": false, 00:13:11.110 "compare": false, 00:13:11.110 "compare_and_write": false, 00:13:11.110 "abort": false, 00:13:11.110 "seek_hole": false, 00:13:11.110 "seek_data": false, 00:13:11.110 "copy": false, 00:13:11.110 "nvme_iov_md": false 00:13:11.110 }, 00:13:11.110 "memory_domains": [ 00:13:11.110 { 00:13:11.110 "dma_device_id": "system", 00:13:11.110 
"dma_device_type": 1 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.110 "dma_device_type": 2 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "dma_device_id": "system", 00:13:11.110 "dma_device_type": 1 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.110 "dma_device_type": 2 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "dma_device_id": "system", 00:13:11.110 "dma_device_type": 1 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.110 "dma_device_type": 2 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "dma_device_id": "system", 00:13:11.110 "dma_device_type": 1 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.110 "dma_device_type": 2 00:13:11.110 } 00:13:11.110 ], 00:13:11.110 "driver_specific": { 00:13:11.110 "raid": { 00:13:11.110 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:11.110 "strip_size_kb": 0, 00:13:11.110 "state": "online", 00:13:11.110 "raid_level": "raid1", 00:13:11.110 "superblock": true, 00:13:11.110 "num_base_bdevs": 4, 00:13:11.110 "num_base_bdevs_discovered": 4, 00:13:11.110 "num_base_bdevs_operational": 4, 00:13:11.110 "base_bdevs_list": [ 00:13:11.110 { 00:13:11.110 "name": "pt1", 00:13:11.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.110 "is_configured": true, 00:13:11.110 "data_offset": 2048, 00:13:11.110 "data_size": 63488 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "name": "pt2", 00:13:11.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.110 "is_configured": true, 00:13:11.110 "data_offset": 2048, 00:13:11.110 "data_size": 63488 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "name": "pt3", 00:13:11.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.110 "is_configured": true, 00:13:11.110 "data_offset": 2048, 00:13:11.110 "data_size": 63488 00:13:11.110 }, 00:13:11.110 { 00:13:11.110 "name": "pt4", 00:13:11.110 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:11.110 "is_configured": true, 00:13:11.110 "data_offset": 2048, 00:13:11.110 "data_size": 63488 00:13:11.110 } 00:13:11.110 ] 00:13:11.110 } 00:13:11.110 } 00:13:11.110 }' 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:11.110 pt2 00:13:11.110 pt3 00:13:11.110 pt4' 00:13:11.110 14:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.110 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 [2024-11-20 14:30:12.244152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5b1ad43a-e28f-4c3d-8084-c3b3b04a398a '!=' 5b1ad43a-e28f-4c3d-8084-c3b3b04a398a ']' 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 [2024-11-20 14:30:12.287906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:11.369 14:30:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.369 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.370 "name": "raid_bdev1", 00:13:11.370 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:11.370 "strip_size_kb": 0, 00:13:11.370 "state": "online", 
00:13:11.370 "raid_level": "raid1", 00:13:11.370 "superblock": true, 00:13:11.370 "num_base_bdevs": 4, 00:13:11.370 "num_base_bdevs_discovered": 3, 00:13:11.370 "num_base_bdevs_operational": 3, 00:13:11.370 "base_bdevs_list": [ 00:13:11.370 { 00:13:11.370 "name": null, 00:13:11.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.370 "is_configured": false, 00:13:11.370 "data_offset": 0, 00:13:11.370 "data_size": 63488 00:13:11.370 }, 00:13:11.370 { 00:13:11.370 "name": "pt2", 00:13:11.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.370 "is_configured": true, 00:13:11.370 "data_offset": 2048, 00:13:11.370 "data_size": 63488 00:13:11.370 }, 00:13:11.370 { 00:13:11.370 "name": "pt3", 00:13:11.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.370 "is_configured": true, 00:13:11.370 "data_offset": 2048, 00:13:11.370 "data_size": 63488 00:13:11.370 }, 00:13:11.370 { 00:13:11.370 "name": "pt4", 00:13:11.370 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.370 "is_configured": true, 00:13:11.370 "data_offset": 2048, 00:13:11.370 "data_size": 63488 00:13:11.370 } 00:13:11.370 ] 00:13:11.370 }' 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.370 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.955 [2024-11-20 14:30:12.824114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.955 [2024-11-20 14:30:12.824179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.955 [2024-11-20 14:30:12.824282] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:11.955 [2024-11-20 14:30:12.824404] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.955 [2024-11-20 14:30:12.824421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:11.955 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:11.956 
14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.956 [2024-11-20 14:30:12.920160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.956 [2024-11-20 14:30:12.920223] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.956 [2024-11-20 14:30:12.920253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:11.956 [2024-11-20 14:30:12.920268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.956 [2024-11-20 14:30:12.923645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.956 [2024-11-20 14:30:12.923695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.956 [2024-11-20 14:30:12.923801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:11.956 [2024-11-20 14:30:12.923863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:11.956 pt2 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.956 "name": "raid_bdev1", 00:13:11.956 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:11.956 "strip_size_kb": 0, 00:13:11.956 "state": "configuring", 00:13:11.956 "raid_level": "raid1", 00:13:11.956 "superblock": true, 00:13:11.956 "num_base_bdevs": 4, 00:13:11.956 "num_base_bdevs_discovered": 1, 00:13:11.956 "num_base_bdevs_operational": 3, 00:13:11.956 "base_bdevs_list": [ 00:13:11.956 { 00:13:11.956 "name": null, 00:13:11.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.956 "is_configured": false, 00:13:11.956 "data_offset": 2048, 00:13:11.956 "data_size": 63488 00:13:11.956 }, 00:13:11.956 { 00:13:11.956 "name": "pt2", 00:13:11.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.956 "is_configured": true, 00:13:11.956 "data_offset": 2048, 00:13:11.956 "data_size": 63488 00:13:11.956 }, 00:13:11.956 { 00:13:11.956 "name": null, 00:13:11.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.956 "is_configured": false, 00:13:11.956 "data_offset": 2048, 00:13:11.956 "data_size": 63488 00:13:11.956 }, 00:13:11.956 { 00:13:11.956 "name": null, 00:13:11.956 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.956 "is_configured": false, 00:13:11.956 "data_offset": 2048, 00:13:11.956 "data_size": 63488 00:13:11.956 } 00:13:11.956 ] 00:13:11.956 }' 
00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.956 14:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 [2024-11-20 14:30:13.456385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:12.549 [2024-11-20 14:30:13.456477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.549 [2024-11-20 14:30:13.456512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:12.549 [2024-11-20 14:30:13.456528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.549 [2024-11-20 14:30:13.457170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.549 [2024-11-20 14:30:13.457196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:12.549 [2024-11-20 14:30:13.457312] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:12.549 [2024-11-20 14:30:13.457346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.549 pt3 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.549 "name": "raid_bdev1", 00:13:12.549 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:12.549 "strip_size_kb": 0, 00:13:12.549 "state": "configuring", 00:13:12.549 "raid_level": "raid1", 00:13:12.549 "superblock": true, 00:13:12.549 "num_base_bdevs": 4, 00:13:12.549 "num_base_bdevs_discovered": 2, 00:13:12.549 "num_base_bdevs_operational": 3, 00:13:12.549 
"base_bdevs_list": [ 00:13:12.549 { 00:13:12.549 "name": null, 00:13:12.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.549 "is_configured": false, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": "pt2", 00:13:12.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.549 "is_configured": true, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": "pt3", 00:13:12.549 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.549 "is_configured": true, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 }, 00:13:12.549 { 00:13:12.549 "name": null, 00:13:12.549 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.549 "is_configured": false, 00:13:12.549 "data_offset": 2048, 00:13:12.549 "data_size": 63488 00:13:12.549 } 00:13:12.549 ] 00:13:12.549 }' 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.549 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.117 [2024-11-20 14:30:13.968584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:13.117 [2024-11-20 14:30:13.968696] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.117 [2024-11-20 14:30:13.968737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:13.117 [2024-11-20 14:30:13.968754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.117 [2024-11-20 14:30:13.969399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.117 [2024-11-20 14:30:13.969428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:13.117 [2024-11-20 14:30:13.969539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:13.117 [2024-11-20 14:30:13.969574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:13.117 [2024-11-20 14:30:13.969768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:13.117 [2024-11-20 14:30:13.969789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:13.117 [2024-11-20 14:30:13.970110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:13.117 [2024-11-20 14:30:13.970307] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:13.117 [2024-11-20 14:30:13.970330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:13.117 [2024-11-20 14:30:13.970511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.117 pt4 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.117 14:30:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.117 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.117 "name": "raid_bdev1", 00:13:13.117 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:13.117 "strip_size_kb": 0, 00:13:13.117 "state": "online", 00:13:13.117 "raid_level": "raid1", 00:13:13.117 "superblock": true, 00:13:13.117 "num_base_bdevs": 4, 00:13:13.117 "num_base_bdevs_discovered": 3, 00:13:13.117 "num_base_bdevs_operational": 3, 00:13:13.117 "base_bdevs_list": [ 00:13:13.117 { 00:13:13.117 "name": null, 00:13:13.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.117 "is_configured": false, 00:13:13.117 
"data_offset": 2048, 00:13:13.117 "data_size": 63488 00:13:13.117 }, 00:13:13.117 { 00:13:13.117 "name": "pt2", 00:13:13.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.118 "is_configured": true, 00:13:13.118 "data_offset": 2048, 00:13:13.118 "data_size": 63488 00:13:13.118 }, 00:13:13.118 { 00:13:13.118 "name": "pt3", 00:13:13.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.118 "is_configured": true, 00:13:13.118 "data_offset": 2048, 00:13:13.118 "data_size": 63488 00:13:13.118 }, 00:13:13.118 { 00:13:13.118 "name": "pt4", 00:13:13.118 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.118 "is_configured": true, 00:13:13.118 "data_offset": 2048, 00:13:13.118 "data_size": 63488 00:13:13.118 } 00:13:13.118 ] 00:13:13.118 }' 00:13:13.118 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.118 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.684 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.684 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.684 [2024-11-20 14:30:14.504754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.684 [2024-11-20 14:30:14.504796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.684 [2024-11-20 14:30:14.504901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.685 [2024-11-20 14:30:14.505014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.685 [2024-11-20 14:30:14.505036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:13.685 14:30:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.685 [2024-11-20 14:30:14.576782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:13.685 [2024-11-20 14:30:14.576855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:13.685 [2024-11-20 14:30:14.576882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:13.685 [2024-11-20 14:30:14.576902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.685 [2024-11-20 14:30:14.580132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.685 [2024-11-20 14:30:14.580194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:13.685 [2024-11-20 14:30:14.580324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:13.685 [2024-11-20 14:30:14.580401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:13.685 [2024-11-20 14:30:14.580617] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:13.685 [2024-11-20 14:30:14.580663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.685 [2024-11-20 14:30:14.580686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:13.685 [2024-11-20 14:30:14.580764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.685 [2024-11-20 14:30:14.580913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.685 pt1 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.685 "name": "raid_bdev1", 00:13:13.685 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:13.685 "strip_size_kb": 0, 00:13:13.685 "state": "configuring", 00:13:13.685 "raid_level": "raid1", 00:13:13.685 "superblock": true, 00:13:13.685 "num_base_bdevs": 4, 00:13:13.685 "num_base_bdevs_discovered": 2, 00:13:13.685 "num_base_bdevs_operational": 3, 00:13:13.685 "base_bdevs_list": [ 00:13:13.685 { 00:13:13.685 "name": null, 00:13:13.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.685 "is_configured": false, 00:13:13.685 "data_offset": 2048, 00:13:13.685 
"data_size": 63488 00:13:13.685 }, 00:13:13.685 { 00:13:13.685 "name": "pt2", 00:13:13.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.685 "is_configured": true, 00:13:13.685 "data_offset": 2048, 00:13:13.685 "data_size": 63488 00:13:13.685 }, 00:13:13.685 { 00:13:13.685 "name": "pt3", 00:13:13.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.685 "is_configured": true, 00:13:13.685 "data_offset": 2048, 00:13:13.685 "data_size": 63488 00:13:13.685 }, 00:13:13.685 { 00:13:13.685 "name": null, 00:13:13.685 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.685 "is_configured": false, 00:13:13.685 "data_offset": 2048, 00:13:13.685 "data_size": 63488 00:13:13.685 } 00:13:13.685 ] 00:13:13.685 }' 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.685 14:30:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.252 [2024-11-20 
14:30:15.149219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:14.252 [2024-11-20 14:30:15.149341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.252 [2024-11-20 14:30:15.149375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:14.252 [2024-11-20 14:30:15.149389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.252 [2024-11-20 14:30:15.150037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.252 [2024-11-20 14:30:15.150063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:14.252 [2024-11-20 14:30:15.150169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:14.252 [2024-11-20 14:30:15.150203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:14.252 [2024-11-20 14:30:15.150418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:14.252 [2024-11-20 14:30:15.150433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:14.252 [2024-11-20 14:30:15.150825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:14.252 [2024-11-20 14:30:15.151010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:14.252 [2024-11-20 14:30:15.151046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:14.252 [2024-11-20 14:30:15.151267] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.252 pt4 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:14.252 14:30:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.252 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.253 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.253 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.253 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.253 "name": "raid_bdev1", 00:13:14.253 "uuid": "5b1ad43a-e28f-4c3d-8084-c3b3b04a398a", 00:13:14.253 "strip_size_kb": 0, 00:13:14.253 "state": "online", 00:13:14.253 "raid_level": "raid1", 00:13:14.253 "superblock": true, 00:13:14.253 "num_base_bdevs": 4, 00:13:14.253 "num_base_bdevs_discovered": 3, 00:13:14.253 "num_base_bdevs_operational": 3, 00:13:14.253 "base_bdevs_list": [ 00:13:14.253 { 
00:13:14.253 "name": null, 00:13:14.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.253 "is_configured": false, 00:13:14.253 "data_offset": 2048, 00:13:14.253 "data_size": 63488 00:13:14.253 }, 00:13:14.253 { 00:13:14.253 "name": "pt2", 00:13:14.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.253 "is_configured": true, 00:13:14.253 "data_offset": 2048, 00:13:14.253 "data_size": 63488 00:13:14.253 }, 00:13:14.253 { 00:13:14.253 "name": "pt3", 00:13:14.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.253 "is_configured": true, 00:13:14.253 "data_offset": 2048, 00:13:14.253 "data_size": 63488 00:13:14.253 }, 00:13:14.253 { 00:13:14.253 "name": "pt4", 00:13:14.253 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:14.253 "is_configured": true, 00:13:14.253 "data_offset": 2048, 00:13:14.253 "data_size": 63488 00:13:14.253 } 00:13:14.253 ] 00:13:14.253 }' 00:13:14.253 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.253 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.819 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:14.819 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:14.819 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.819 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.819 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.819 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:14.819 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:14.820 
14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.820 [2024-11-20 14:30:15.713926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5b1ad43a-e28f-4c3d-8084-c3b3b04a398a '!=' 5b1ad43a-e28f-4c3d-8084-c3b3b04a398a ']' 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74772 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74772 ']' 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74772 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74772 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.820 killing process with pid 74772 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74772' 00:13:14.820 14:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74772 00:13:14.820 [2024-11-20 14:30:15.787404] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.820 [2024-11-20 14:30:15.787565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.820 14:30:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74772 00:13:14.820 [2024-11-20 14:30:15.787701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.820 [2024-11-20 14:30:15.787725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:15.385 [2024-11-20 14:30:16.154786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.320 14:30:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:16.320 00:13:16.320 real 0m9.481s 00:13:16.320 user 0m15.469s 00:13:16.320 sys 0m1.428s 00:13:16.320 14:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.320 14:30:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.320 ************************************ 00:13:16.320 END TEST raid_superblock_test 00:13:16.320 ************************************ 00:13:16.320 14:30:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:16.320 14:30:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:16.320 14:30:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.320 14:30:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:16.320 ************************************ 00:13:16.320 START TEST raid_read_error_test 00:13:16.320 ************************************ 00:13:16.320 14:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:16.320 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:16.320 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:16.320 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:16.320 
14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:16.321 14:30:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Li8m3wTj3R 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75275 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75275 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75275 ']' 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.321 14:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.579 [2024-11-20 14:30:17.446925] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:13:16.579 [2024-11-20 14:30:17.447115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75275 ] 00:13:16.836 [2024-11-20 14:30:17.638259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.836 [2024-11-20 14:30:17.799246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.096 [2024-11-20 14:30:18.052094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.096 [2024-11-20 14:30:18.052192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.663 BaseBdev1_malloc 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.663 true 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.663 [2024-11-20 14:30:18.512098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:17.663 [2024-11-20 14:30:18.512167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.663 [2024-11-20 14:30:18.512197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:17.663 [2024-11-20 14:30:18.512215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.663 [2024-11-20 14:30:18.515059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.663 [2024-11-20 14:30:18.515111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.663 BaseBdev1 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.663 BaseBdev2_malloc 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.663 true 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.663 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.663 [2024-11-20 14:30:18.572930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:17.663 [2024-11-20 14:30:18.573002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.663 [2024-11-20 14:30:18.573029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:17.663 [2024-11-20 14:30:18.573046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.664 [2024-11-20 14:30:18.575938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.664 [2024-11-20 14:30:18.575990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.664 BaseBdev2 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.664 BaseBdev3_malloc 00:13:17.664 14:30:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.664 true 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.664 [2024-11-20 14:30:18.650167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:17.664 [2024-11-20 14:30:18.650237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.664 [2024-11-20 14:30:18.650269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:17.664 [2024-11-20 14:30:18.650289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.664 [2024-11-20 14:30:18.653226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.664 [2024-11-20 14:30:18.653277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:17.664 BaseBdev3 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.664 BaseBdev4_malloc 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.664 true 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.664 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.664 [2024-11-20 14:30:18.713844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:17.664 [2024-11-20 14:30:18.713953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.664 [2024-11-20 14:30:18.713993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:17.664 [2024-11-20 14:30:18.714011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.664 [2024-11-20 14:30:18.716997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.664 [2024-11-20 14:30:18.717050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:17.664 BaseBdev4 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.922 [2024-11-20 14:30:18.721971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.922 [2024-11-20 14:30:18.724594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.922 [2024-11-20 14:30:18.724719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.922 [2024-11-20 14:30:18.724819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:17.922 [2024-11-20 14:30:18.725147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:17.922 [2024-11-20 14:30:18.725179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:17.922 [2024-11-20 14:30:18.725493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:17.922 [2024-11-20 14:30:18.725747] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:17.922 [2024-11-20 14:30:18.725774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:17.922 [2024-11-20 14:30:18.726029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:17.922 14:30:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.922 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.923 "name": "raid_bdev1", 00:13:17.923 "uuid": "df824c15-6f8c-4af7-8c78-f3ba33f1494c", 00:13:17.923 "strip_size_kb": 0, 00:13:17.923 "state": "online", 00:13:17.923 "raid_level": "raid1", 00:13:17.923 "superblock": true, 00:13:17.923 "num_base_bdevs": 4, 00:13:17.923 "num_base_bdevs_discovered": 4, 00:13:17.923 "num_base_bdevs_operational": 4, 00:13:17.923 "base_bdevs_list": [ 00:13:17.923 { 
00:13:17.923 "name": "BaseBdev1", 00:13:17.923 "uuid": "3cdf2fd5-ee32-59b4-bcb8-9f62af821fa6", 00:13:17.923 "is_configured": true, 00:13:17.923 "data_offset": 2048, 00:13:17.923 "data_size": 63488 00:13:17.923 }, 00:13:17.923 { 00:13:17.923 "name": "BaseBdev2", 00:13:17.923 "uuid": "617d6947-4b37-51a9-9794-0126db4bcb52", 00:13:17.923 "is_configured": true, 00:13:17.923 "data_offset": 2048, 00:13:17.923 "data_size": 63488 00:13:17.923 }, 00:13:17.923 { 00:13:17.923 "name": "BaseBdev3", 00:13:17.923 "uuid": "5307589b-bd4b-5ac8-b646-2bb972a96281", 00:13:17.923 "is_configured": true, 00:13:17.923 "data_offset": 2048, 00:13:17.923 "data_size": 63488 00:13:17.923 }, 00:13:17.923 { 00:13:17.923 "name": "BaseBdev4", 00:13:17.923 "uuid": "44b5c89c-2672-5317-9f7b-1124aa2dbbb3", 00:13:17.923 "is_configured": true, 00:13:17.923 "data_offset": 2048, 00:13:17.923 "data_size": 63488 00:13:17.923 } 00:13:17.923 ] 00:13:17.923 }' 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.923 14:30:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.491 14:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:18.491 14:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:18.491 [2024-11-20 14:30:19.375689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.426 14:30:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.426 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.427 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.427 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.427 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.427 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.427 14:30:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.427 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.427 "name": "raid_bdev1", 00:13:19.427 "uuid": "df824c15-6f8c-4af7-8c78-f3ba33f1494c", 00:13:19.427 "strip_size_kb": 0, 00:13:19.427 "state": "online", 00:13:19.427 "raid_level": "raid1", 00:13:19.427 "superblock": true, 00:13:19.427 "num_base_bdevs": 4, 00:13:19.427 "num_base_bdevs_discovered": 4, 00:13:19.427 "num_base_bdevs_operational": 4, 00:13:19.427 "base_bdevs_list": [ 00:13:19.427 { 00:13:19.427 "name": "BaseBdev1", 00:13:19.427 "uuid": "3cdf2fd5-ee32-59b4-bcb8-9f62af821fa6", 00:13:19.427 "is_configured": true, 00:13:19.427 "data_offset": 2048, 00:13:19.427 "data_size": 63488 00:13:19.427 }, 00:13:19.427 { 00:13:19.427 "name": "BaseBdev2", 00:13:19.427 "uuid": "617d6947-4b37-51a9-9794-0126db4bcb52", 00:13:19.427 "is_configured": true, 00:13:19.427 "data_offset": 2048, 00:13:19.427 "data_size": 63488 00:13:19.427 }, 00:13:19.427 { 00:13:19.427 "name": "BaseBdev3", 00:13:19.427 "uuid": "5307589b-bd4b-5ac8-b646-2bb972a96281", 00:13:19.427 "is_configured": true, 00:13:19.427 "data_offset": 2048, 00:13:19.427 "data_size": 63488 00:13:19.427 }, 00:13:19.427 { 00:13:19.427 "name": "BaseBdev4", 00:13:19.427 "uuid": "44b5c89c-2672-5317-9f7b-1124aa2dbbb3", 00:13:19.427 "is_configured": true, 00:13:19.427 "data_offset": 2048, 00:13:19.427 "data_size": 63488 00:13:19.427 } 00:13:19.427 ] 00:13:19.427 }' 00:13:19.427 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.427 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.994 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.994 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.994 14:30:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.994 [2024-11-20 14:30:20.775387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.995 [2024-11-20 14:30:20.775461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.995 [2024-11-20 14:30:20.779079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.995 [2024-11-20 14:30:20.779179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.995 [2024-11-20 14:30:20.779440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.995 [2024-11-20 14:30:20.779477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:19.995 { 00:13:19.995 "results": [ 00:13:19.995 { 00:13:19.995 "job": "raid_bdev1", 00:13:19.995 "core_mask": "0x1", 00:13:19.995 "workload": "randrw", 00:13:19.995 "percentage": 50, 00:13:19.995 "status": "finished", 00:13:19.995 "queue_depth": 1, 00:13:19.995 "io_size": 131072, 00:13:19.995 "runtime": 1.397149, 00:13:19.995 "iops": 6760.195226135509, 00:13:19.995 "mibps": 845.0244032669386, 00:13:19.995 "io_failed": 0, 00:13:19.995 "io_timeout": 0, 00:13:19.995 "avg_latency_us": 143.1968814668656, 00:13:19.995 "min_latency_us": 41.658181818181816, 00:13:19.995 "max_latency_us": 1899.0545454545454 00:13:19.995 } 00:13:19.995 ], 00:13:19.995 "core_count": 1 00:13:19.995 } 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75275 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75275 ']' 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75275 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75275 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.995 killing process with pid 75275 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75275' 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75275 00:13:19.995 [2024-11-20 14:30:20.816172] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.995 14:30:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75275 00:13:20.253 [2024-11-20 14:30:21.127866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Li8m3wTj3R 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:21.654 00:13:21.654 real 0m5.044s 00:13:21.654 user 0m6.159s 00:13:21.654 sys 0m0.643s 
00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.654 14:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.654 ************************************ 00:13:21.654 END TEST raid_read_error_test 00:13:21.654 ************************************ 00:13:21.654 14:30:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:21.654 14:30:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:21.654 14:30:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.654 14:30:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.654 ************************************ 00:13:21.654 START TEST raid_write_error_test 00:13:21.654 ************************************ 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v2WObIo826 00:13:21.654 14:30:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75422 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75422 00:13:21.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75422 ']' 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.654 14:30:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.654 [2024-11-20 14:30:22.527897] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:13:21.654 [2024-11-20 14:30:22.528295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75422 ] 00:13:21.654 [2024-11-20 14:30:22.707809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.912 [2024-11-20 14:30:22.853194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.170 [2024-11-20 14:30:23.060164] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.170 [2024-11-20 14:30:23.060210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:22.736 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.736 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:22.736 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.736 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 BaseBdev1_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 true 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 [2024-11-20 14:30:23.660249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:22.737 [2024-11-20 14:30:23.660318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.737 [2024-11-20 14:30:23.660345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:22.737 [2024-11-20 14:30:23.660362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.737 [2024-11-20 14:30:23.663236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.737 [2024-11-20 14:30:23.663283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.737 BaseBdev1 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 BaseBdev2_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:22.737 14:30:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 true 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 [2024-11-20 14:30:23.720270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:22.737 [2024-11-20 14:30:23.720378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.737 [2024-11-20 14:30:23.720426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:22.737 [2024-11-20 14:30:23.720458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.737 [2024-11-20 14:30:23.724777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.737 [2024-11-20 14:30:23.724851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.737 BaseBdev2 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:22.737 BaseBdev3_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 true 00:13:22.737 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.996 [2024-11-20 14:30:23.792748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:22.996 [2024-11-20 14:30:23.793018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.996 [2024-11-20 14:30:23.793085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:22.996 [2024-11-20 14:30:23.793194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.996 [2024-11-20 14:30:23.796332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.996 [2024-11-20 14:30:23.796382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:22.996 BaseBdev3 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.996 BaseBdev4_malloc 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.996 true 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.996 [2024-11-20 14:30:23.858376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:22.996 [2024-11-20 14:30:23.858658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.996 [2024-11-20 14:30:23.858737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:22.996 [2024-11-20 14:30:23.858875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.996 [2024-11-20 14:30:23.862162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.996 [2024-11-20 14:30:23.862351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:22.996 BaseBdev4 
00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.996 [2024-11-20 14:30:23.866783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.996 [2024-11-20 14:30:23.869619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.996 [2024-11-20 14:30:23.869910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.996 [2024-11-20 14:30:23.870077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:22.996 [2024-11-20 14:30:23.870558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:22.996 [2024-11-20 14:30:23.870633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:22.996 [2024-11-20 14:30:23.871130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:22.996 [2024-11-20 14:30:23.871516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:22.996 [2024-11-20 14:30:23.871696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:22.996 [2024-11-20 14:30:23.872233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.996 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.996 "name": "raid_bdev1", 00:13:22.996 "uuid": "fab62d67-4b39-45ae-b334-19b8e9c79f19", 00:13:22.996 "strip_size_kb": 0, 00:13:22.996 "state": "online", 00:13:22.996 "raid_level": "raid1", 00:13:22.996 "superblock": true, 00:13:22.996 "num_base_bdevs": 4, 00:13:22.996 "num_base_bdevs_discovered": 4, 00:13:22.996 
"num_base_bdevs_operational": 4, 00:13:22.996 "base_bdevs_list": [ 00:13:22.996 { 00:13:22.996 "name": "BaseBdev1", 00:13:22.996 "uuid": "5e02e7e1-a1f8-557f-968c-16ee4ac8b883", 00:13:22.996 "is_configured": true, 00:13:22.996 "data_offset": 2048, 00:13:22.996 "data_size": 63488 00:13:22.996 }, 00:13:22.996 { 00:13:22.996 "name": "BaseBdev2", 00:13:22.996 "uuid": "e377f729-7a5f-523b-945c-531fa92093cd", 00:13:22.996 "is_configured": true, 00:13:22.996 "data_offset": 2048, 00:13:22.996 "data_size": 63488 00:13:22.996 }, 00:13:22.997 { 00:13:22.997 "name": "BaseBdev3", 00:13:22.997 "uuid": "67c5474d-0add-5cff-9875-609d7fda5fcc", 00:13:22.997 "is_configured": true, 00:13:22.997 "data_offset": 2048, 00:13:22.997 "data_size": 63488 00:13:22.997 }, 00:13:22.997 { 00:13:22.997 "name": "BaseBdev4", 00:13:22.997 "uuid": "3fb4ab25-b68a-57da-85cc-e1c57db11fb6", 00:13:22.997 "is_configured": true, 00:13:22.997 "data_offset": 2048, 00:13:22.997 "data_size": 63488 00:13:22.997 } 00:13:22.997 ] 00:13:22.997 }' 00:13:22.997 14:30:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.997 14:30:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.563 14:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:23.563 14:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:23.563 [2024-11-20 14:30:24.516557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.497 [2024-11-20 14:30:25.389620] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:24.497 [2024-11-20 14:30:25.389732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.497 [2024-11-20 14:30:25.390048] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.497 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.497 "name": "raid_bdev1", 00:13:24.497 "uuid": "fab62d67-4b39-45ae-b334-19b8e9c79f19", 00:13:24.497 "strip_size_kb": 0, 00:13:24.497 "state": "online", 00:13:24.497 "raid_level": "raid1", 00:13:24.497 "superblock": true, 00:13:24.497 "num_base_bdevs": 4, 00:13:24.497 "num_base_bdevs_discovered": 3, 00:13:24.497 "num_base_bdevs_operational": 3, 00:13:24.497 "base_bdevs_list": [ 00:13:24.497 { 00:13:24.497 "name": null, 00:13:24.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.497 "is_configured": false, 00:13:24.497 "data_offset": 0, 00:13:24.497 "data_size": 63488 00:13:24.497 }, 00:13:24.497 { 00:13:24.497 "name": "BaseBdev2", 00:13:24.497 "uuid": "e377f729-7a5f-523b-945c-531fa92093cd", 00:13:24.497 "is_configured": true, 00:13:24.497 "data_offset": 2048, 00:13:24.497 "data_size": 63488 00:13:24.497 }, 00:13:24.497 { 00:13:24.497 "name": "BaseBdev3", 00:13:24.497 "uuid": "67c5474d-0add-5cff-9875-609d7fda5fcc", 00:13:24.497 "is_configured": true, 00:13:24.497 "data_offset": 2048, 00:13:24.497 "data_size": 63488 00:13:24.497 }, 00:13:24.497 { 00:13:24.497 "name": "BaseBdev4", 00:13:24.497 "uuid": "3fb4ab25-b68a-57da-85cc-e1c57db11fb6", 00:13:24.497 "is_configured": true, 00:13:24.497 "data_offset": 2048, 00:13:24.497 "data_size": 63488 00:13:24.497 } 00:13:24.497 ] 
00:13:24.497 }' 00:13:24.498 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.498 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.064 [2024-11-20 14:30:25.933712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.064 [2024-11-20 14:30:25.933746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.064 { 00:13:25.064 "results": [ 00:13:25.064 { 00:13:25.064 "job": "raid_bdev1", 00:13:25.064 "core_mask": "0x1", 00:13:25.064 "workload": "randrw", 00:13:25.064 "percentage": 50, 00:13:25.064 "status": "finished", 00:13:25.064 "queue_depth": 1, 00:13:25.064 "io_size": 131072, 00:13:25.064 "runtime": 1.414221, 00:13:25.064 "iops": 8220.07310031459, 00:13:25.064 "mibps": 1027.5091375393238, 00:13:25.064 "io_failed": 0, 00:13:25.064 "io_timeout": 0, 00:13:25.064 "avg_latency_us": 117.47292965786902, 00:13:25.064 "min_latency_us": 37.46909090909091, 00:13:25.064 "max_latency_us": 1891.6072727272726 00:13:25.064 } 00:13:25.064 ], 00:13:25.064 "core_count": 1 00:13:25.064 } 00:13:25.064 [2024-11-20 14:30:25.937392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.064 [2024-11-20 14:30:25.937456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.064 [2024-11-20 14:30:25.937651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.064 [2024-11-20 14:30:25.937666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75422 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75422 ']' 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75422 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75422 00:13:25.064 killing process with pid 75422 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75422' 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75422 00:13:25.064 [2024-11-20 14:30:25.972754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.064 14:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75422 00:13:25.322 [2024-11-20 14:30:26.270541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.698 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v2WObIo826 00:13:26.698 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:26.698 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:26.698 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:26.698 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:26.698 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:26.699 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:26.699 14:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:26.699 ************************************ 00:13:26.699 END TEST raid_write_error_test 00:13:26.699 ************************************ 00:13:26.699 00:13:26.699 real 0m5.017s 00:13:26.699 user 0m6.186s 00:13:26.699 sys 0m0.642s 00:13:26.699 14:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.699 14:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.699 14:30:27 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:26.699 14:30:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:26.699 14:30:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:26.699 14:30:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:26.699 14:30:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.699 14:30:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.699 ************************************ 00:13:26.699 START TEST raid_rebuild_test 00:13:26.699 ************************************ 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:26.699 
14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75570 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75570 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75570 ']' 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.699 14:30:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.699 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:26.699 Zero copy mechanism will not be used. 00:13:26.699 [2024-11-20 14:30:27.595573] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:13:26.699 [2024-11-20 14:30:27.595762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75570 ] 00:13:26.958 [2024-11-20 14:30:27.776450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.958 [2024-11-20 14:30:27.914645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.215 [2024-11-20 14:30:28.125820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.215 [2024-11-20 14:30:28.125892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.782 BaseBdev1_malloc 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.782 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.782 [2024-11-20 14:30:28.709418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:27.783 
[2024-11-20 14:30:28.709644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.783 [2024-11-20 14:30:28.709718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:27.783 [2024-11-20 14:30:28.709967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.783 [2024-11-20 14:30:28.712883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.783 [2024-11-20 14:30:28.713060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:27.783 BaseBdev1 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.783 BaseBdev2_malloc 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.783 [2024-11-20 14:30:28.759457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:27.783 [2024-11-20 14:30:28.759578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.783 [2024-11-20 14:30:28.759742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:13:27.783 [2024-11-20 14:30:28.759878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.783 [2024-11-20 14:30:28.762862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.783 [2024-11-20 14:30:28.763037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:27.783 BaseBdev2 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.783 spare_malloc 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.783 spare_delay 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.783 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.783 [2024-11-20 14:30:28.837396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.783 [2024-11-20 14:30:28.837601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:27.783 [2024-11-20 14:30:28.837703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:28.041 [2024-11-20 14:30:28.837835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.041 [2024-11-20 14:30:28.840917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.041 [2024-11-20 14:30:28.840971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.041 spare 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.041 [2024-11-20 14:30:28.845615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.041 [2024-11-20 14:30:28.848346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.041 [2024-11-20 14:30:28.848619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:28.041 [2024-11-20 14:30:28.848775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:28.041 [2024-11-20 14:30:28.849116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:28.041 [2024-11-20 14:30:28.849374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:28.041 [2024-11-20 14:30:28.849395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:28.041 [2024-11-20 14:30:28.849676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.041 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.041 "name": "raid_bdev1", 00:13:28.041 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:28.041 "strip_size_kb": 0, 00:13:28.041 "state": "online", 00:13:28.041 
"raid_level": "raid1", 00:13:28.041 "superblock": false, 00:13:28.041 "num_base_bdevs": 2, 00:13:28.041 "num_base_bdevs_discovered": 2, 00:13:28.041 "num_base_bdevs_operational": 2, 00:13:28.041 "base_bdevs_list": [ 00:13:28.041 { 00:13:28.041 "name": "BaseBdev1", 00:13:28.041 "uuid": "0da70962-b6d3-55a0-b8a5-1b435b9e5bb5", 00:13:28.041 "is_configured": true, 00:13:28.041 "data_offset": 0, 00:13:28.042 "data_size": 65536 00:13:28.042 }, 00:13:28.042 { 00:13:28.042 "name": "BaseBdev2", 00:13:28.042 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:28.042 "is_configured": true, 00:13:28.042 "data_offset": 0, 00:13:28.042 "data_size": 65536 00:13:28.042 } 00:13:28.042 ] 00:13:28.042 }' 00:13:28.042 14:30:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.042 14:30:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.612 [2024-11-20 14:30:29.378231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.612 14:30:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.612 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:28.871 [2024-11-20 14:30:29.761983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:28.871 /dev/nbd0 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.871 1+0 records in 00:13:28.871 1+0 records out 00:13:28.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605893 s, 6.8 MB/s 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:28.871 14:30:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:35.426 65536+0 records in 00:13:35.426 65536+0 records out 00:13:35.426 33554432 bytes (34 MB, 32 MiB) copied, 6.63262 s, 5.1 MB/s 00:13:35.426 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:35.426 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:35.426 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:35.426 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.426 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:35.426 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.426 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:35.683 [2024-11-20 14:30:36.718803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.942 [2024-11-20 14:30:36.758904] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.942 "name": "raid_bdev1", 00:13:35.942 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:35.942 "strip_size_kb": 0, 00:13:35.942 "state": "online", 00:13:35.942 "raid_level": "raid1", 00:13:35.942 "superblock": false, 00:13:35.942 "num_base_bdevs": 2, 00:13:35.942 "num_base_bdevs_discovered": 1, 00:13:35.942 "num_base_bdevs_operational": 1, 00:13:35.942 "base_bdevs_list": [ 00:13:35.942 { 00:13:35.942 "name": null, 00:13:35.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.942 "is_configured": false, 00:13:35.942 "data_offset": 0, 00:13:35.942 "data_size": 65536 00:13:35.942 }, 00:13:35.942 { 00:13:35.942 "name": "BaseBdev2", 00:13:35.942 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:35.942 "is_configured": true, 00:13:35.942 "data_offset": 0, 00:13:35.942 "data_size": 65536 00:13:35.942 } 00:13:35.942 ] 00:13:35.942 }' 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.942 14:30:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.509 14:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.509 14:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.509 14:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.509 [2024-11-20 14:30:37.263129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.509 [2024-11-20 14:30:37.280991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:13:36.509 14:30:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.509 14:30:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:36.509 [2024-11-20 14:30:37.283665] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.444 "name": "raid_bdev1", 00:13:37.444 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:37.444 "strip_size_kb": 0, 00:13:37.444 "state": "online", 00:13:37.444 "raid_level": "raid1", 00:13:37.444 "superblock": false, 00:13:37.444 "num_base_bdevs": 2, 00:13:37.444 "num_base_bdevs_discovered": 2, 00:13:37.444 "num_base_bdevs_operational": 2, 00:13:37.444 "process": { 00:13:37.444 "type": "rebuild", 00:13:37.444 "target": "spare", 00:13:37.444 "progress": { 00:13:37.444 
"blocks": 20480, 00:13:37.444 "percent": 31 00:13:37.444 } 00:13:37.444 }, 00:13:37.444 "base_bdevs_list": [ 00:13:37.444 { 00:13:37.444 "name": "spare", 00:13:37.444 "uuid": "7709b27e-eebd-5314-93f8-ccc1eda60354", 00:13:37.444 "is_configured": true, 00:13:37.444 "data_offset": 0, 00:13:37.444 "data_size": 65536 00:13:37.444 }, 00:13:37.444 { 00:13:37.444 "name": "BaseBdev2", 00:13:37.444 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:37.444 "is_configured": true, 00:13:37.444 "data_offset": 0, 00:13:37.444 "data_size": 65536 00:13:37.444 } 00:13:37.444 ] 00:13:37.444 }' 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.444 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.444 [2024-11-20 14:30:38.457507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.444 [2024-11-20 14:30:38.494154] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.444 [2024-11-20 14:30:38.494380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.444 [2024-11-20 14:30:38.494410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.444 [2024-11-20 14:30:38.494427] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.702 14:30:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.702 "name": "raid_bdev1", 00:13:37.702 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:37.702 "strip_size_kb": 0, 00:13:37.702 "state": "online", 00:13:37.702 "raid_level": "raid1", 00:13:37.702 
"superblock": false, 00:13:37.702 "num_base_bdevs": 2, 00:13:37.702 "num_base_bdevs_discovered": 1, 00:13:37.702 "num_base_bdevs_operational": 1, 00:13:37.702 "base_bdevs_list": [ 00:13:37.702 { 00:13:37.702 "name": null, 00:13:37.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.702 "is_configured": false, 00:13:37.702 "data_offset": 0, 00:13:37.702 "data_size": 65536 00:13:37.702 }, 00:13:37.702 { 00:13:37.702 "name": "BaseBdev2", 00:13:37.702 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:37.702 "is_configured": true, 00:13:37.702 "data_offset": 0, 00:13:37.702 "data_size": 65536 00:13:37.702 } 00:13:37.702 ] 00:13:37.702 }' 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.702 14:30:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:38.271 "name": "raid_bdev1", 00:13:38.271 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:38.271 "strip_size_kb": 0, 00:13:38.271 "state": "online", 00:13:38.271 "raid_level": "raid1", 00:13:38.271 "superblock": false, 00:13:38.271 "num_base_bdevs": 2, 00:13:38.271 "num_base_bdevs_discovered": 1, 00:13:38.271 "num_base_bdevs_operational": 1, 00:13:38.271 "base_bdevs_list": [ 00:13:38.271 { 00:13:38.271 "name": null, 00:13:38.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.271 "is_configured": false, 00:13:38.271 "data_offset": 0, 00:13:38.271 "data_size": 65536 00:13:38.271 }, 00:13:38.271 { 00:13:38.271 "name": "BaseBdev2", 00:13:38.271 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:38.271 "is_configured": true, 00:13:38.271 "data_offset": 0, 00:13:38.271 "data_size": 65536 00:13:38.271 } 00:13:38.271 ] 00:13:38.271 }' 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.271 [2024-11-20 14:30:39.243075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.271 [2024-11-20 14:30:39.259718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:38.271 14:30:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.271 
14:30:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:38.271 [2024-11-20 14:30:39.262841] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.650 "name": "raid_bdev1", 00:13:39.650 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:39.650 "strip_size_kb": 0, 00:13:39.650 "state": "online", 00:13:39.650 "raid_level": "raid1", 00:13:39.650 "superblock": false, 00:13:39.650 "num_base_bdevs": 2, 00:13:39.650 "num_base_bdevs_discovered": 2, 00:13:39.650 "num_base_bdevs_operational": 2, 00:13:39.650 "process": { 00:13:39.650 "type": "rebuild", 00:13:39.650 "target": "spare", 00:13:39.650 "progress": { 00:13:39.650 "blocks": 20480, 00:13:39.650 "percent": 31 00:13:39.650 } 00:13:39.650 }, 00:13:39.650 "base_bdevs_list": [ 
00:13:39.650 { 00:13:39.650 "name": "spare", 00:13:39.650 "uuid": "7709b27e-eebd-5314-93f8-ccc1eda60354", 00:13:39.650 "is_configured": true, 00:13:39.650 "data_offset": 0, 00:13:39.650 "data_size": 65536 00:13:39.650 }, 00:13:39.650 { 00:13:39.650 "name": "BaseBdev2", 00:13:39.650 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:39.650 "is_configured": true, 00:13:39.650 "data_offset": 0, 00:13:39.650 "data_size": 65536 00:13:39.650 } 00:13:39.650 ] 00:13:39.650 }' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=402 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.650 
14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.650 "name": "raid_bdev1", 00:13:39.650 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:39.650 "strip_size_kb": 0, 00:13:39.650 "state": "online", 00:13:39.650 "raid_level": "raid1", 00:13:39.650 "superblock": false, 00:13:39.650 "num_base_bdevs": 2, 00:13:39.650 "num_base_bdevs_discovered": 2, 00:13:39.650 "num_base_bdevs_operational": 2, 00:13:39.650 "process": { 00:13:39.650 "type": "rebuild", 00:13:39.650 "target": "spare", 00:13:39.650 "progress": { 00:13:39.650 "blocks": 22528, 00:13:39.650 "percent": 34 00:13:39.650 } 00:13:39.650 }, 00:13:39.650 "base_bdevs_list": [ 00:13:39.650 { 00:13:39.650 "name": "spare", 00:13:39.650 "uuid": "7709b27e-eebd-5314-93f8-ccc1eda60354", 00:13:39.650 "is_configured": true, 00:13:39.650 "data_offset": 0, 00:13:39.650 "data_size": 65536 00:13:39.650 }, 00:13:39.650 { 00:13:39.650 "name": "BaseBdev2", 00:13:39.650 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:39.650 "is_configured": true, 00:13:39.650 "data_offset": 0, 00:13:39.650 "data_size": 65536 00:13:39.650 } 00:13:39.650 ] 00:13:39.650 }' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.650 14:30:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.583 14:30:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.842 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.842 "name": "raid_bdev1", 00:13:40.842 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:40.842 "strip_size_kb": 0, 00:13:40.842 "state": "online", 00:13:40.842 "raid_level": "raid1", 00:13:40.842 "superblock": false, 00:13:40.842 "num_base_bdevs": 2, 00:13:40.842 "num_base_bdevs_discovered": 2, 00:13:40.842 "num_base_bdevs_operational": 2, 00:13:40.842 "process": { 
00:13:40.842 "type": "rebuild", 00:13:40.842 "target": "spare", 00:13:40.842 "progress": { 00:13:40.842 "blocks": 47104, 00:13:40.842 "percent": 71 00:13:40.842 } 00:13:40.842 }, 00:13:40.842 "base_bdevs_list": [ 00:13:40.842 { 00:13:40.842 "name": "spare", 00:13:40.842 "uuid": "7709b27e-eebd-5314-93f8-ccc1eda60354", 00:13:40.842 "is_configured": true, 00:13:40.842 "data_offset": 0, 00:13:40.842 "data_size": 65536 00:13:40.842 }, 00:13:40.842 { 00:13:40.842 "name": "BaseBdev2", 00:13:40.842 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:40.842 "is_configured": true, 00:13:40.842 "data_offset": 0, 00:13:40.842 "data_size": 65536 00:13:40.842 } 00:13:40.842 ] 00:13:40.842 }' 00:13:40.842 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.842 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.842 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.842 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.842 14:30:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.778 [2024-11-20 14:30:42.487668] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:41.778 [2024-11-20 14:30:42.487962] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:41.778 [2024-11-20 14:30:42.488041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.778 "name": "raid_bdev1", 00:13:41.778 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:41.778 "strip_size_kb": 0, 00:13:41.778 "state": "online", 00:13:41.778 "raid_level": "raid1", 00:13:41.778 "superblock": false, 00:13:41.778 "num_base_bdevs": 2, 00:13:41.778 "num_base_bdevs_discovered": 2, 00:13:41.778 "num_base_bdevs_operational": 2, 00:13:41.778 "base_bdevs_list": [ 00:13:41.778 { 00:13:41.778 "name": "spare", 00:13:41.778 "uuid": "7709b27e-eebd-5314-93f8-ccc1eda60354", 00:13:41.778 "is_configured": true, 00:13:41.778 "data_offset": 0, 00:13:41.778 "data_size": 65536 00:13:41.778 }, 00:13:41.778 { 00:13:41.778 "name": "BaseBdev2", 00:13:41.778 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:41.778 "is_configured": true, 00:13:41.778 "data_offset": 0, 00:13:41.778 "data_size": 65536 00:13:41.778 } 00:13:41.778 ] 00:13:41.778 }' 00:13:41.778 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.037 14:30:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.037 "name": "raid_bdev1", 00:13:42.037 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:42.037 "strip_size_kb": 0, 00:13:42.037 "state": "online", 00:13:42.037 "raid_level": "raid1", 00:13:42.037 "superblock": false, 00:13:42.037 "num_base_bdevs": 2, 00:13:42.037 "num_base_bdevs_discovered": 2, 00:13:42.037 "num_base_bdevs_operational": 2, 00:13:42.037 "base_bdevs_list": [ 00:13:42.037 { 00:13:42.037 "name": "spare", 00:13:42.037 "uuid": "7709b27e-eebd-5314-93f8-ccc1eda60354", 00:13:42.037 "is_configured": true, 
00:13:42.037 "data_offset": 0, 00:13:42.037 "data_size": 65536 00:13:42.037 }, 00:13:42.037 { 00:13:42.037 "name": "BaseBdev2", 00:13:42.037 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:42.037 "is_configured": true, 00:13:42.037 "data_offset": 0, 00:13:42.037 "data_size": 65536 00:13:42.037 } 00:13:42.037 ] 00:13:42.037 }' 00:13:42.037 14:30:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.037 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.037 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.295 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.296 "name": "raid_bdev1", 00:13:42.296 "uuid": "3baf8155-1b10-4424-84f6-ab374d539769", 00:13:42.296 "strip_size_kb": 0, 00:13:42.296 "state": "online", 00:13:42.296 "raid_level": "raid1", 00:13:42.296 "superblock": false, 00:13:42.296 "num_base_bdevs": 2, 00:13:42.296 "num_base_bdevs_discovered": 2, 00:13:42.296 "num_base_bdevs_operational": 2, 00:13:42.296 "base_bdevs_list": [ 00:13:42.296 { 00:13:42.296 "name": "spare", 00:13:42.296 "uuid": "7709b27e-eebd-5314-93f8-ccc1eda60354", 00:13:42.296 "is_configured": true, 00:13:42.296 "data_offset": 0, 00:13:42.296 "data_size": 65536 00:13:42.296 }, 00:13:42.296 { 00:13:42.296 "name": "BaseBdev2", 00:13:42.296 "uuid": "2600bdb0-4c06-5b2b-96af-e262aee9d8c3", 00:13:42.296 "is_configured": true, 00:13:42.296 "data_offset": 0, 00:13:42.296 "data_size": 65536 00:13:42.296 } 00:13:42.296 ] 00:13:42.296 }' 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.296 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.862 [2024-11-20 14:30:43.642379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.862 [2024-11-20 14:30:43.642419] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.862 [2024-11-20 14:30:43.642539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.862 [2024-11-20 14:30:43.642630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.862 [2024-11-20 14:30:43.642680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:42.862 14:30:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:43.121 /dev/nbd0 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.121 1+0 records in 00:13:43.121 1+0 records out 00:13:43.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033798 s, 12.1 MB/s 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.121 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:43.381 /dev/nbd1 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.381 1+0 records in 00:13:43.381 1+0 records out 00:13:43.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050755 s, 8.1 MB/s 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.381 14:30:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:43.639 14:30:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:43.639 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.639 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.639 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.639 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:43.639 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.639 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.939 14:30:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75570 00:13:44.197 14:30:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75570 ']' 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75570 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75570 00:13:44.197 killing process with pid 75570 00:13:44.197 Received shutdown signal, test time was about 60.000000 seconds 00:13:44.197 00:13:44.197 Latency(us) 00:13:44.197 [2024-11-20T14:30:45.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.197 [2024-11-20T14:30:45.254Z] =================================================================================================================== 00:13:44.197 [2024-11-20T14:30:45.254Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75570' 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75570 00:13:44.197 [2024-11-20 14:30:45.128482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.197 14:30:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75570 00:13:44.456 [2024-11-20 14:30:45.398106] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:45.832 ************************************ 00:13:45.832 END TEST raid_rebuild_test 00:13:45.832 
************************************ 00:13:45.832 00:13:45.832 real 0m18.960s 00:13:45.832 user 0m21.473s 00:13:45.832 sys 0m3.539s 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.832 14:30:46 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:45.832 14:30:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:45.832 14:30:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.832 14:30:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.832 ************************************ 00:13:45.832 START TEST raid_rebuild_test_sb 00:13:45.832 ************************************ 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76018 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76018 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76018 ']' 00:13:45.832 
14:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.832 14:30:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.832 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:45.832 Zero copy mechanism will not be used. 00:13:45.832 [2024-11-20 14:30:46.632805] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:13:45.832 [2024-11-20 14:30:46.632994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76018 ] 00:13:45.832 [2024-11-20 14:30:46.817792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.090 [2024-11-20 14:30:46.950019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.348 [2024-11-20 14:30:47.154820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.348 [2024-11-20 14:30:47.154904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.607 BaseBdev1_malloc 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.607 [2024-11-20 14:30:47.645447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:46.607 [2024-11-20 14:30:47.645536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.607 [2024-11-20 14:30:47.645588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:46.607 [2024-11-20 14:30:47.645618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.607 [2024-11-20 14:30:47.649106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.607 [2024-11-20 14:30:47.649320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.607 BaseBdev1 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.607 14:30:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.607 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.865 BaseBdev2_malloc 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.865 [2024-11-20 14:30:47.702707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:46.865 [2024-11-20 14:30:47.702805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.865 [2024-11-20 14:30:47.702859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:46.865 [2024-11-20 14:30:47.702889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.865 [2024-11-20 14:30:47.706067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.865 [2024-11-20 14:30:47.706123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.865 BaseBdev2 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.865 spare_malloc 00:13:46.865 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.866 spare_delay 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.866 [2024-11-20 14:30:47.774678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:46.866 [2024-11-20 14:30:47.774767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.866 [2024-11-20 14:30:47.774809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:46.866 [2024-11-20 14:30:47.774837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.866 [2024-11-20 14:30:47.778089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.866 [2024-11-20 14:30:47.778146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:46.866 spare 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.866 14:30:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.866 [2024-11-20 14:30:47.783072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.866 [2024-11-20 14:30:47.785808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.866 [2024-11-20 14:30:47.786074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:46.866 [2024-11-20 14:30:47.786100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.866 [2024-11-20 14:30:47.786427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:46.866 [2024-11-20 14:30:47.786714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:46.866 [2024-11-20 14:30:47.786732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:46.866 [2024-11-20 14:30:47.786971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.866 "name": "raid_bdev1", 00:13:46.866 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:46.866 "strip_size_kb": 0, 00:13:46.866 "state": "online", 00:13:46.866 "raid_level": "raid1", 00:13:46.866 "superblock": true, 00:13:46.866 "num_base_bdevs": 2, 00:13:46.866 "num_base_bdevs_discovered": 2, 00:13:46.866 "num_base_bdevs_operational": 2, 00:13:46.866 "base_bdevs_list": [ 00:13:46.866 { 00:13:46.866 "name": "BaseBdev1", 00:13:46.866 "uuid": "9435c947-9aa8-5787-a8ed-0e20fd743446", 00:13:46.866 "is_configured": true, 00:13:46.866 "data_offset": 2048, 00:13:46.866 "data_size": 63488 00:13:46.866 }, 00:13:46.866 { 00:13:46.866 "name": "BaseBdev2", 00:13:46.866 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:46.866 "is_configured": true, 00:13:46.866 "data_offset": 2048, 00:13:46.866 "data_size": 63488 00:13:46.866 } 00:13:46.866 ] 00:13:46.866 }' 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.866 14:30:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.433 [2024-11-20 14:30:48.307633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.433 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:47.692 [2024-11-20 14:30:48.703454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.692 /dev/nbd0 00:13:47.692 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:47.692 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:47.692 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:47.692 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:47.692 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:47.692 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:47.692 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.950 1+0 records in 00:13:47.950 1+0 records out 00:13:47.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566872 s, 7.2 MB/s 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:47.950 14:30:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:54.536 63488+0 records in 00:13:54.536 63488+0 records out 00:13:54.536 32505856 bytes (33 MB, 31 MiB) copied, 6.16534 s, 5.3 MB/s 00:13:54.536 14:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:54.536 14:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.536 14:30:54 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:54.536 14:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:54.536 14:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:54.536 14:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.536 14:30:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:54.536 [2024-11-20 14:30:55.236061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.536 [2024-11-20 14:30:55.268166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.536 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.537 "name": "raid_bdev1", 00:13:54.537 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:54.537 "strip_size_kb": 0, 00:13:54.537 "state": "online", 00:13:54.537 "raid_level": "raid1", 00:13:54.537 "superblock": true, 
00:13:54.537 "num_base_bdevs": 2, 00:13:54.537 "num_base_bdevs_discovered": 1, 00:13:54.537 "num_base_bdevs_operational": 1, 00:13:54.537 "base_bdevs_list": [ 00:13:54.537 { 00:13:54.537 "name": null, 00:13:54.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.537 "is_configured": false, 00:13:54.537 "data_offset": 0, 00:13:54.537 "data_size": 63488 00:13:54.537 }, 00:13:54.537 { 00:13:54.537 "name": "BaseBdev2", 00:13:54.537 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:54.537 "is_configured": true, 00:13:54.537 "data_offset": 2048, 00:13:54.537 "data_size": 63488 00:13:54.537 } 00:13:54.537 ] 00:13:54.537 }' 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.537 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.821 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.821 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.821 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.821 [2024-11-20 14:30:55.776379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.821 [2024-11-20 14:30:55.793600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:54.821 14:30:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.821 14:30:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:54.821 [2024-11-20 14:30:55.796302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.781 14:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.039 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.039 "name": "raid_bdev1", 00:13:56.039 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:56.039 "strip_size_kb": 0, 00:13:56.039 "state": "online", 00:13:56.039 "raid_level": "raid1", 00:13:56.039 "superblock": true, 00:13:56.039 "num_base_bdevs": 2, 00:13:56.039 "num_base_bdevs_discovered": 2, 00:13:56.039 "num_base_bdevs_operational": 2, 00:13:56.039 "process": { 00:13:56.039 "type": "rebuild", 00:13:56.039 "target": "spare", 00:13:56.039 "progress": { 00:13:56.039 "blocks": 20480, 00:13:56.039 "percent": 32 00:13:56.039 } 00:13:56.039 }, 00:13:56.039 "base_bdevs_list": [ 00:13:56.039 { 00:13:56.039 "name": "spare", 00:13:56.039 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:13:56.039 "is_configured": true, 00:13:56.039 "data_offset": 2048, 00:13:56.039 "data_size": 63488 00:13:56.039 }, 00:13:56.039 { 00:13:56.039 "name": "BaseBdev2", 00:13:56.039 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:56.039 "is_configured": true, 00:13:56.039 "data_offset": 2048, 00:13:56.039 "data_size": 63488 
00:13:56.039 } 00:13:56.039 ] 00:13:56.039 }' 00:13:56.039 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.039 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.039 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.039 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.040 14:30:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:56.040 14:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.040 14:30:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.040 [2024-11-20 14:30:56.961672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.040 [2024-11-20 14:30:57.005669] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:56.040 [2024-11-20 14:30:57.005759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.040 [2024-11-20 14:30:57.005796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:56.040 [2024-11-20 14:30:57.005816] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.040 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.298 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.298 "name": "raid_bdev1", 00:13:56.298 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:56.298 "strip_size_kb": 0, 00:13:56.298 "state": "online", 00:13:56.298 "raid_level": "raid1", 00:13:56.298 "superblock": true, 00:13:56.298 "num_base_bdevs": 2, 00:13:56.298 "num_base_bdevs_discovered": 1, 00:13:56.298 "num_base_bdevs_operational": 1, 00:13:56.298 "base_bdevs_list": [ 00:13:56.298 { 00:13:56.298 "name": null, 00:13:56.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.298 "is_configured": false, 00:13:56.298 "data_offset": 0, 00:13:56.298 "data_size": 63488 00:13:56.298 }, 00:13:56.298 { 00:13:56.298 "name": "BaseBdev2", 00:13:56.298 "uuid": 
"9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:56.298 "is_configured": true, 00:13:56.298 "data_offset": 2048, 00:13:56.298 "data_size": 63488 00:13:56.298 } 00:13:56.298 ] 00:13:56.298 }' 00:13:56.298 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.298 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.556 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.557 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.557 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.557 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.557 "name": "raid_bdev1", 00:13:56.557 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:56.557 "strip_size_kb": 0, 00:13:56.557 "state": "online", 00:13:56.557 "raid_level": "raid1", 00:13:56.557 "superblock": true, 00:13:56.557 "num_base_bdevs": 2, 00:13:56.557 "num_base_bdevs_discovered": 1, 00:13:56.557 "num_base_bdevs_operational": 1, 00:13:56.557 "base_bdevs_list": [ 00:13:56.557 { 
00:13:56.557 "name": null, 00:13:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.557 "is_configured": false, 00:13:56.557 "data_offset": 0, 00:13:56.557 "data_size": 63488 00:13:56.557 }, 00:13:56.557 { 00:13:56.557 "name": "BaseBdev2", 00:13:56.557 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:56.557 "is_configured": true, 00:13:56.557 "data_offset": 2048, 00:13:56.557 "data_size": 63488 00:13:56.557 } 00:13:56.557 ] 00:13:56.557 }' 00:13:56.557 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.815 [2024-11-20 14:30:57.702851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.815 [2024-11-20 14:30:57.719151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.815 14:30:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:56.815 [2024-11-20 14:30:57.721928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.751 14:30:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.751 "name": "raid_bdev1", 00:13:57.751 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:57.751 "strip_size_kb": 0, 00:13:57.751 "state": "online", 00:13:57.751 "raid_level": "raid1", 00:13:57.751 "superblock": true, 00:13:57.751 "num_base_bdevs": 2, 00:13:57.751 "num_base_bdevs_discovered": 2, 00:13:57.751 "num_base_bdevs_operational": 2, 00:13:57.751 "process": { 00:13:57.751 "type": "rebuild", 00:13:57.751 "target": "spare", 00:13:57.751 "progress": { 00:13:57.751 "blocks": 20480, 00:13:57.751 "percent": 32 00:13:57.751 } 00:13:57.751 }, 00:13:57.751 "base_bdevs_list": [ 00:13:57.751 { 00:13:57.751 "name": "spare", 00:13:57.751 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:13:57.751 "is_configured": true, 00:13:57.751 "data_offset": 2048, 00:13:57.751 "data_size": 63488 00:13:57.751 }, 00:13:57.751 { 00:13:57.751 "name": "BaseBdev2", 00:13:57.751 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:57.751 
"is_configured": true, 00:13:57.751 "data_offset": 2048, 00:13:57.751 "data_size": 63488 00:13:57.751 } 00:13:57.751 ] 00:13:57.751 }' 00:13:57.751 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:58.010 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=420 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.010 "name": "raid_bdev1", 00:13:58.010 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:58.010 "strip_size_kb": 0, 00:13:58.010 "state": "online", 00:13:58.010 "raid_level": "raid1", 00:13:58.010 "superblock": true, 00:13:58.010 "num_base_bdevs": 2, 00:13:58.010 "num_base_bdevs_discovered": 2, 00:13:58.010 "num_base_bdevs_operational": 2, 00:13:58.010 "process": { 00:13:58.010 "type": "rebuild", 00:13:58.010 "target": "spare", 00:13:58.010 "progress": { 00:13:58.010 "blocks": 22528, 00:13:58.010 "percent": 35 00:13:58.010 } 00:13:58.010 }, 00:13:58.010 "base_bdevs_list": [ 00:13:58.010 { 00:13:58.010 "name": "spare", 00:13:58.010 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:13:58.010 "is_configured": true, 00:13:58.010 "data_offset": 2048, 00:13:58.010 "data_size": 63488 00:13:58.010 }, 00:13:58.010 { 00:13:58.010 "name": "BaseBdev2", 00:13:58.010 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:58.010 "is_configured": true, 00:13:58.010 "data_offset": 2048, 00:13:58.010 "data_size": 63488 00:13:58.010 } 00:13:58.010 ] 00:13:58.010 }' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.010 14:30:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.010 14:30:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.010 14:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.010 14:30:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.419 "name": "raid_bdev1", 00:13:59.419 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:13:59.419 "strip_size_kb": 0, 00:13:59.419 "state": "online", 00:13:59.419 "raid_level": "raid1", 00:13:59.419 "superblock": true, 00:13:59.419 "num_base_bdevs": 2, 00:13:59.419 "num_base_bdevs_discovered": 2, 00:13:59.419 "num_base_bdevs_operational": 2, 00:13:59.419 "process": { 
00:13:59.419 "type": "rebuild", 00:13:59.419 "target": "spare", 00:13:59.419 "progress": { 00:13:59.419 "blocks": 47104, 00:13:59.419 "percent": 74 00:13:59.419 } 00:13:59.419 }, 00:13:59.419 "base_bdevs_list": [ 00:13:59.419 { 00:13:59.419 "name": "spare", 00:13:59.419 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:13:59.419 "is_configured": true, 00:13:59.419 "data_offset": 2048, 00:13:59.419 "data_size": 63488 00:13:59.419 }, 00:13:59.419 { 00:13:59.419 "name": "BaseBdev2", 00:13:59.419 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:13:59.419 "is_configured": true, 00:13:59.419 "data_offset": 2048, 00:13:59.419 "data_size": 63488 00:13:59.419 } 00:13:59.419 ] 00:13:59.419 }' 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.419 14:31:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.986 [2024-11-20 14:31:00.845681] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.986 [2024-11-20 14:31:00.845804] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.986 [2024-11-20 14:31:00.846000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.244 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.244 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.244 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.244 
14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.244 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.244 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.245 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.245 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.245 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.245 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.245 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.245 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.245 "name": "raid_bdev1", 00:14:00.245 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:00.245 "strip_size_kb": 0, 00:14:00.245 "state": "online", 00:14:00.245 "raid_level": "raid1", 00:14:00.245 "superblock": true, 00:14:00.245 "num_base_bdevs": 2, 00:14:00.245 "num_base_bdevs_discovered": 2, 00:14:00.245 "num_base_bdevs_operational": 2, 00:14:00.245 "base_bdevs_list": [ 00:14:00.245 { 00:14:00.245 "name": "spare", 00:14:00.245 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:14:00.245 "is_configured": true, 00:14:00.245 "data_offset": 2048, 00:14:00.245 "data_size": 63488 00:14:00.245 }, 00:14:00.245 { 00:14:00.245 "name": "BaseBdev2", 00:14:00.245 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:00.245 "is_configured": true, 00:14:00.245 "data_offset": 2048, 00:14:00.245 "data_size": 63488 00:14:00.245 } 00:14:00.245 ] 00:14:00.245 }' 00:14:00.245 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.503 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.503 "name": "raid_bdev1", 00:14:00.503 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:00.503 "strip_size_kb": 0, 00:14:00.503 "state": "online", 00:14:00.503 "raid_level": "raid1", 00:14:00.503 "superblock": true, 00:14:00.503 "num_base_bdevs": 2, 00:14:00.503 "num_base_bdevs_discovered": 2, 00:14:00.503 "num_base_bdevs_operational": 2, 00:14:00.503 "base_bdevs_list": [ 00:14:00.503 { 00:14:00.503 
"name": "spare", 00:14:00.503 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:14:00.503 "is_configured": true, 00:14:00.503 "data_offset": 2048, 00:14:00.503 "data_size": 63488 00:14:00.503 }, 00:14:00.503 { 00:14:00.503 "name": "BaseBdev2", 00:14:00.503 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:00.503 "is_configured": true, 00:14:00.503 "data_offset": 2048, 00:14:00.503 "data_size": 63488 00:14:00.503 } 00:14:00.503 ] 00:14:00.503 }' 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.504 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.762 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.762 "name": "raid_bdev1", 00:14:00.762 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:00.762 "strip_size_kb": 0, 00:14:00.762 "state": "online", 00:14:00.762 "raid_level": "raid1", 00:14:00.762 "superblock": true, 00:14:00.762 "num_base_bdevs": 2, 00:14:00.762 "num_base_bdevs_discovered": 2, 00:14:00.762 "num_base_bdevs_operational": 2, 00:14:00.762 "base_bdevs_list": [ 00:14:00.762 { 00:14:00.762 "name": "spare", 00:14:00.762 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:14:00.762 "is_configured": true, 00:14:00.762 "data_offset": 2048, 00:14:00.762 "data_size": 63488 00:14:00.762 }, 00:14:00.762 { 00:14:00.762 "name": "BaseBdev2", 00:14:00.762 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:00.762 "is_configured": true, 00:14:00.762 "data_offset": 2048, 00:14:00.762 "data_size": 63488 00:14:00.762 } 00:14:00.762 ] 00:14:00.762 }' 00:14:00.762 14:31:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.762 14:31:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.328 [2024-11-20 14:31:02.111537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.328 [2024-11-20 14:31:02.111737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.328 [2024-11-20 14:31:02.111884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.328 [2024-11-20 14:31:02.111982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.328 [2024-11-20 14:31:02.112003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.328 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:01.586 /dev/nbd0 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.586 1+0 records in 00:14:01.586 1+0 records out 00:14:01.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488699 s, 8.4 MB/s 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.586 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:01.844 /dev/nbd1 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.845 14:31:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.845 1+0 records in 00:14:01.845 1+0 records out 00:14:01.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341653 s, 12.0 MB/s 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.845 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:02.103 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:02.103 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.103 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.103 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.103 
14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:02.103 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.103 14:31:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.361 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.361 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.361 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.361 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.361 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.361 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.361 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.362 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.362 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.362 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.620 [2024-11-20 14:31:03.661284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.620 [2024-11-20 14:31:03.661473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.620 [2024-11-20 14:31:03.661656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:02.620 [2024-11-20 14:31:03.661796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.620 [2024-11-20 14:31:03.664906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.620 [2024-11-20 14:31:03.665167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.620 [2024-11-20 14:31:03.665328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:02.620 [2024-11-20 
14:31:03.665399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.620 [2024-11-20 14:31:03.665657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.620 spare 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.620 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.879 [2024-11-20 14:31:03.765806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:02.879 [2024-11-20 14:31:03.765887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.879 [2024-11-20 14:31:03.766334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:02.879 [2024-11-20 14:31:03.766603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:02.879 [2024-11-20 14:31:03.766621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:02.879 [2024-11-20 14:31:03.766897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.879 "name": "raid_bdev1", 00:14:02.879 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:02.879 "strip_size_kb": 0, 00:14:02.879 "state": "online", 00:14:02.879 "raid_level": "raid1", 00:14:02.879 "superblock": true, 00:14:02.879 "num_base_bdevs": 2, 00:14:02.879 "num_base_bdevs_discovered": 2, 00:14:02.879 "num_base_bdevs_operational": 2, 00:14:02.879 "base_bdevs_list": [ 00:14:02.879 { 00:14:02.879 "name": "spare", 00:14:02.879 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:14:02.879 "is_configured": true, 00:14:02.879 "data_offset": 2048, 00:14:02.879 "data_size": 63488 00:14:02.879 }, 00:14:02.879 { 00:14:02.879 "name": "BaseBdev2", 00:14:02.879 "uuid": 
"9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:02.879 "is_configured": true, 00:14:02.879 "data_offset": 2048, 00:14:02.879 "data_size": 63488 00:14:02.879 } 00:14:02.879 ] 00:14:02.879 }' 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.879 14:31:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.446 "name": "raid_bdev1", 00:14:03.446 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:03.446 "strip_size_kb": 0, 00:14:03.446 "state": "online", 00:14:03.446 "raid_level": "raid1", 00:14:03.446 "superblock": true, 00:14:03.446 "num_base_bdevs": 2, 00:14:03.446 "num_base_bdevs_discovered": 2, 00:14:03.446 "num_base_bdevs_operational": 2, 00:14:03.446 "base_bdevs_list": [ 00:14:03.446 { 
00:14:03.446 "name": "spare", 00:14:03.446 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:14:03.446 "is_configured": true, 00:14:03.446 "data_offset": 2048, 00:14:03.446 "data_size": 63488 00:14:03.446 }, 00:14:03.446 { 00:14:03.446 "name": "BaseBdev2", 00:14:03.446 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:03.446 "is_configured": true, 00:14:03.446 "data_offset": 2048, 00:14:03.446 "data_size": 63488 00:14:03.446 } 00:14:03.446 ] 00:14:03.446 }' 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.446 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 [2024-11-20 14:31:04.517691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.705 "name": "raid_bdev1", 00:14:03.705 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:03.705 "strip_size_kb": 0, 00:14:03.705 
"state": "online", 00:14:03.705 "raid_level": "raid1", 00:14:03.705 "superblock": true, 00:14:03.705 "num_base_bdevs": 2, 00:14:03.705 "num_base_bdevs_discovered": 1, 00:14:03.705 "num_base_bdevs_operational": 1, 00:14:03.705 "base_bdevs_list": [ 00:14:03.705 { 00:14:03.705 "name": null, 00:14:03.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.705 "is_configured": false, 00:14:03.705 "data_offset": 0, 00:14:03.705 "data_size": 63488 00:14:03.705 }, 00:14:03.705 { 00:14:03.705 "name": "BaseBdev2", 00:14:03.705 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:03.705 "is_configured": true, 00:14:03.705 "data_offset": 2048, 00:14:03.705 "data_size": 63488 00:14:03.705 } 00:14:03.705 ] 00:14:03.705 }' 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.705 14:31:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.271 14:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.271 14:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.271 14:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.271 [2024-11-20 14:31:05.045919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.271 [2024-11-20 14:31:05.046355] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:04.271 [2024-11-20 14:31:05.046546] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:04.271 [2024-11-20 14:31:05.046871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.271 [2024-11-20 14:31:05.062668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:04.271 14:31:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.271 14:31:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:04.271 [2024-11-20 14:31:05.065469] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.206 "name": "raid_bdev1", 00:14:05.206 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:05.206 "strip_size_kb": 0, 00:14:05.206 "state": "online", 00:14:05.206 "raid_level": "raid1", 
00:14:05.206 "superblock": true, 00:14:05.206 "num_base_bdevs": 2, 00:14:05.206 "num_base_bdevs_discovered": 2, 00:14:05.206 "num_base_bdevs_operational": 2, 00:14:05.206 "process": { 00:14:05.206 "type": "rebuild", 00:14:05.206 "target": "spare", 00:14:05.206 "progress": { 00:14:05.206 "blocks": 20480, 00:14:05.206 "percent": 32 00:14:05.206 } 00:14:05.206 }, 00:14:05.206 "base_bdevs_list": [ 00:14:05.206 { 00:14:05.206 "name": "spare", 00:14:05.206 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:14:05.206 "is_configured": true, 00:14:05.206 "data_offset": 2048, 00:14:05.206 "data_size": 63488 00:14:05.206 }, 00:14:05.206 { 00:14:05.206 "name": "BaseBdev2", 00:14:05.206 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:05.206 "is_configured": true, 00:14:05.206 "data_offset": 2048, 00:14:05.206 "data_size": 63488 00:14:05.206 } 00:14:05.206 ] 00:14:05.206 }' 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.206 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.206 [2024-11-20 14:31:06.227451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.464 [2024-11-20 14:31:06.275414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.464 [2024-11-20 14:31:06.275882] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:05.464 [2024-11-20 14:31:06.275919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.464 [2024-11-20 14:31:06.275936] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.464 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.465 "name": "raid_bdev1", 00:14:05.465 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:05.465 "strip_size_kb": 0, 00:14:05.465 "state": "online", 00:14:05.465 "raid_level": "raid1", 00:14:05.465 "superblock": true, 00:14:05.465 "num_base_bdevs": 2, 00:14:05.465 "num_base_bdevs_discovered": 1, 00:14:05.465 "num_base_bdevs_operational": 1, 00:14:05.465 "base_bdevs_list": [ 00:14:05.465 { 00:14:05.465 "name": null, 00:14:05.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.465 "is_configured": false, 00:14:05.465 "data_offset": 0, 00:14:05.465 "data_size": 63488 00:14:05.465 }, 00:14:05.465 { 00:14:05.465 "name": "BaseBdev2", 00:14:05.465 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:05.465 "is_configured": true, 00:14:05.465 "data_offset": 2048, 00:14:05.465 "data_size": 63488 00:14:05.465 } 00:14:05.465 ] 00:14:05.465 }' 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.465 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.031 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.031 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.031 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.031 [2024-11-20 14:31:06.832739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.031 [2024-11-20 14:31:06.832837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.031 [2024-11-20 14:31:06.832878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:06.031 [2024-11-20 14:31:06.832898] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.031 [2024-11-20 14:31:06.833630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.031 [2024-11-20 14:31:06.833709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.032 [2024-11-20 14:31:06.833843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:06.032 [2024-11-20 14:31:06.833871] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:06.032 [2024-11-20 14:31:06.833885] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:06.032 [2024-11-20 14:31:06.833931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.032 [2024-11-20 14:31:06.850439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:06.032 spare 00:14:06.032 14:31:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.032 14:31:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:06.032 [2024-11-20 14:31:06.853141] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.966 "name": "raid_bdev1", 00:14:06.966 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:06.966 "strip_size_kb": 0, 00:14:06.966 "state": "online", 00:14:06.966 "raid_level": "raid1", 00:14:06.966 "superblock": true, 00:14:06.966 "num_base_bdevs": 2, 00:14:06.966 "num_base_bdevs_discovered": 2, 00:14:06.966 "num_base_bdevs_operational": 2, 00:14:06.966 "process": { 00:14:06.966 "type": "rebuild", 00:14:06.966 "target": "spare", 00:14:06.966 "progress": { 00:14:06.966 "blocks": 20480, 00:14:06.966 "percent": 32 00:14:06.966 } 00:14:06.966 }, 00:14:06.966 "base_bdevs_list": [ 00:14:06.966 { 00:14:06.966 "name": "spare", 00:14:06.966 "uuid": "d9a338dc-07d6-5a9c-ba31-b697f834a6db", 00:14:06.966 "is_configured": true, 00:14:06.966 "data_offset": 2048, 00:14:06.966 "data_size": 63488 00:14:06.966 }, 00:14:06.966 { 00:14:06.966 "name": "BaseBdev2", 00:14:06.966 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:06.966 "is_configured": true, 00:14:06.966 "data_offset": 2048, 00:14:06.966 "data_size": 63488 00:14:06.966 } 00:14:06.966 ] 00:14:06.966 }' 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.966 14:31:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.966 
14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.966 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.966 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.966 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.966 [2024-11-20 14:31:08.018805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.224 [2024-11-20 14:31:08.062709] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:07.224 [2024-11-20 14:31:08.062818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.224 [2024-11-20 14:31:08.062848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.224 [2024-11-20 14:31:08.062861] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.224 "name": "raid_bdev1", 00:14:07.224 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:07.224 "strip_size_kb": 0, 00:14:07.224 "state": "online", 00:14:07.224 "raid_level": "raid1", 00:14:07.224 "superblock": true, 00:14:07.224 "num_base_bdevs": 2, 00:14:07.224 "num_base_bdevs_discovered": 1, 00:14:07.224 "num_base_bdevs_operational": 1, 00:14:07.224 "base_bdevs_list": [ 00:14:07.224 { 00:14:07.224 "name": null, 00:14:07.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.224 "is_configured": false, 00:14:07.224 "data_offset": 0, 00:14:07.224 "data_size": 63488 00:14:07.224 }, 00:14:07.224 { 00:14:07.224 "name": "BaseBdev2", 00:14:07.224 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:07.224 "is_configured": true, 00:14:07.224 "data_offset": 2048, 00:14:07.224 "data_size": 63488 00:14:07.224 } 00:14:07.224 ] 00:14:07.224 }' 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.224 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.790 14:31:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.790 "name": "raid_bdev1", 00:14:07.790 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:07.790 "strip_size_kb": 0, 00:14:07.790 "state": "online", 00:14:07.790 "raid_level": "raid1", 00:14:07.790 "superblock": true, 00:14:07.790 "num_base_bdevs": 2, 00:14:07.790 "num_base_bdevs_discovered": 1, 00:14:07.790 "num_base_bdevs_operational": 1, 00:14:07.790 "base_bdevs_list": [ 00:14:07.790 { 00:14:07.790 "name": null, 00:14:07.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.790 "is_configured": false, 00:14:07.790 "data_offset": 0, 00:14:07.790 "data_size": 63488 00:14:07.790 }, 00:14:07.790 { 00:14:07.790 "name": "BaseBdev2", 00:14:07.790 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:07.790 "is_configured": true, 00:14:07.790 "data_offset": 2048, 00:14:07.790 "data_size": 
63488 00:14:07.790 } 00:14:07.790 ] 00:14:07.790 }' 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.790 [2024-11-20 14:31:08.796322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:07.790 [2024-11-20 14:31:08.796410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.790 [2024-11-20 14:31:08.796452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:07.790 [2024-11-20 14:31:08.796479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.790 [2024-11-20 14:31:08.797137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.790 [2024-11-20 14:31:08.797170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:07.790 [2024-11-20 14:31:08.797274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:07.790 [2024-11-20 14:31:08.797297] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:07.790 [2024-11-20 14:31:08.797310] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:07.790 [2024-11-20 14:31:08.797323] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:07.790 BaseBdev1 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.790 14:31:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.184 "name": "raid_bdev1", 00:14:09.184 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:09.184 "strip_size_kb": 0, 00:14:09.184 "state": "online", 00:14:09.184 "raid_level": "raid1", 00:14:09.184 "superblock": true, 00:14:09.184 "num_base_bdevs": 2, 00:14:09.184 "num_base_bdevs_discovered": 1, 00:14:09.184 "num_base_bdevs_operational": 1, 00:14:09.184 "base_bdevs_list": [ 00:14:09.184 { 00:14:09.184 "name": null, 00:14:09.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.184 "is_configured": false, 00:14:09.184 "data_offset": 0, 00:14:09.184 "data_size": 63488 00:14:09.184 }, 00:14:09.184 { 00:14:09.184 "name": "BaseBdev2", 00:14:09.184 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:09.184 "is_configured": true, 00:14:09.184 "data_offset": 2048, 00:14:09.184 "data_size": 63488 00:14:09.184 } 00:14:09.184 ] 00:14:09.184 }' 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.184 14:31:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.443 "name": "raid_bdev1", 00:14:09.443 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:09.443 "strip_size_kb": 0, 00:14:09.443 "state": "online", 00:14:09.443 "raid_level": "raid1", 00:14:09.443 "superblock": true, 00:14:09.443 "num_base_bdevs": 2, 00:14:09.443 "num_base_bdevs_discovered": 1, 00:14:09.443 "num_base_bdevs_operational": 1, 00:14:09.443 "base_bdevs_list": [ 00:14:09.443 { 00:14:09.443 "name": null, 00:14:09.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.443 "is_configured": false, 00:14:09.443 "data_offset": 0, 00:14:09.443 "data_size": 63488 00:14:09.443 }, 00:14:09.443 { 00:14:09.443 "name": "BaseBdev2", 00:14:09.443 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:09.443 "is_configured": true, 00:14:09.443 "data_offset": 2048, 00:14:09.443 "data_size": 63488 00:14:09.443 } 00:14:09.443 ] 00:14:09.443 }' 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.443 14:31:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.443 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.701 [2024-11-20 14:31:10.501037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.701 [2024-11-20 14:31:10.501275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:09.701 [2024-11-20 14:31:10.501307] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:09.701 request: 00:14:09.701 { 00:14:09.701 "base_bdev": "BaseBdev1", 00:14:09.701 "raid_bdev": "raid_bdev1", 00:14:09.701 "method": 
"bdev_raid_add_base_bdev", 00:14:09.701 "req_id": 1 00:14:09.701 } 00:14:09.701 Got JSON-RPC error response 00:14:09.701 response: 00:14:09.701 { 00:14:09.701 "code": -22, 00:14:09.701 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:09.701 } 00:14:09.701 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:09.701 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:09.701 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:09.701 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:09.701 14:31:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:09.701 14:31:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.636 14:31:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.636 "name": "raid_bdev1", 00:14:10.636 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:10.636 "strip_size_kb": 0, 00:14:10.636 "state": "online", 00:14:10.636 "raid_level": "raid1", 00:14:10.636 "superblock": true, 00:14:10.636 "num_base_bdevs": 2, 00:14:10.636 "num_base_bdevs_discovered": 1, 00:14:10.636 "num_base_bdevs_operational": 1, 00:14:10.636 "base_bdevs_list": [ 00:14:10.636 { 00:14:10.636 "name": null, 00:14:10.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.636 "is_configured": false, 00:14:10.636 "data_offset": 0, 00:14:10.636 "data_size": 63488 00:14:10.636 }, 00:14:10.636 { 00:14:10.636 "name": "BaseBdev2", 00:14:10.636 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:10.636 "is_configured": true, 00:14:10.636 "data_offset": 2048, 00:14:10.636 "data_size": 63488 00:14:10.636 } 00:14:10.636 ] 00:14:10.636 }' 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.636 14:31:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.204 "name": "raid_bdev1", 00:14:11.204 "uuid": "5a77243a-b937-4f70-9ca8-dd5bc5ce4b1f", 00:14:11.204 "strip_size_kb": 0, 00:14:11.204 "state": "online", 00:14:11.204 "raid_level": "raid1", 00:14:11.204 "superblock": true, 00:14:11.204 "num_base_bdevs": 2, 00:14:11.204 "num_base_bdevs_discovered": 1, 00:14:11.204 "num_base_bdevs_operational": 1, 00:14:11.204 "base_bdevs_list": [ 00:14:11.204 { 00:14:11.204 "name": null, 00:14:11.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.204 "is_configured": false, 00:14:11.204 "data_offset": 0, 00:14:11.204 "data_size": 63488 00:14:11.204 }, 00:14:11.204 { 00:14:11.204 "name": "BaseBdev2", 00:14:11.204 "uuid": "9883227a-cb2c-5d13-91fe-7f4c9a4e48d9", 00:14:11.204 "is_configured": true, 00:14:11.204 "data_offset": 2048, 00:14:11.204 "data_size": 63488 00:14:11.204 } 00:14:11.204 ] 00:14:11.204 }' 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76018 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76018 ']' 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76018 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.204 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76018 00:14:11.463 killing process with pid 76018 00:14:11.463 Received shutdown signal, test time was about 60.000000 seconds 00:14:11.463 00:14:11.463 Latency(us) 00:14:11.463 [2024-11-20T14:31:12.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.463 [2024-11-20T14:31:12.520Z] =================================================================================================================== 00:14:11.463 [2024-11-20T14:31:12.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:11.463 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.463 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.463 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76018' 00:14:11.463 14:31:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76018 00:14:11.463 [2024-11-20 14:31:12.266331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.463 14:31:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76018 00:14:11.463 [2024-11-20 14:31:12.266515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.463 [2024-11-20 14:31:12.266605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.463 [2024-11-20 14:31:12.266626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:11.722 [2024-11-20 14:31:12.547047] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.690 14:31:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:12.690 00:14:12.690 real 0m27.120s 00:14:12.690 user 0m33.398s 00:14:12.691 sys 0m4.084s 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.691 ************************************ 00:14:12.691 END TEST raid_rebuild_test_sb 00:14:12.691 ************************************ 00:14:12.691 14:31:13 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:12.691 14:31:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:12.691 14:31:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.691 14:31:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.691 ************************************ 00:14:12.691 START TEST raid_rebuild_test_io 00:14:12.691 ************************************ 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:12.691 
14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76787 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76787 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76787 ']' 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.691 14:31:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.949 [2024-11-20 14:31:13.810518] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:14:12.949 [2024-11-20 14:31:13.811018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76787 ] 00:14:12.949 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.949 Zero copy mechanism will not be used. 
00:14:13.207 [2024-11-20 14:31:14.006216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.207 [2024-11-20 14:31:14.160583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.465 [2024-11-20 14:31:14.377440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.465 [2024-11-20 14:31:14.377816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 BaseBdev1_malloc 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 [2024-11-20 14:31:14.871437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:14.033 [2024-11-20 14:31:14.871533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.033 [2024-11-20 14:31:14.871567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:14.033 [2024-11-20 
14:31:14.871586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.033 [2024-11-20 14:31:14.874407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.033 [2024-11-20 14:31:14.874458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:14.033 BaseBdev1 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 BaseBdev2_malloc 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 [2024-11-20 14:31:14.924577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:14.033 [2024-11-20 14:31:14.924671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.033 [2024-11-20 14:31:14.924718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:14.033 [2024-11-20 14:31:14.924749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.033 [2024-11-20 14:31:14.927588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:14.033 [2024-11-20 14:31:14.927700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:14.033 BaseBdev2 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 spare_malloc 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 spare_delay 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 [2024-11-20 14:31:14.994696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:14.033 [2024-11-20 14:31:14.994920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.033 [2024-11-20 14:31:14.994959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:14.033 [2024-11-20 14:31:14.994979] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.033 [2024-11-20 14:31:14.997828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.033 [2024-11-20 14:31:14.997890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:14.033 spare 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 [2024-11-20 14:31:15.002792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.033 [2024-11-20 14:31:15.005196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.033 [2024-11-20 14:31:15.005452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:14.033 [2024-11-20 14:31:15.005482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:14.033 [2024-11-20 14:31:15.005831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:14.033 [2024-11-20 14:31:15.006053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:14.033 [2024-11-20 14:31:15.006073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:14.033 [2024-11-20 14:31:15.006264] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.033 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.034 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.034 "name": "raid_bdev1", 00:14:14.034 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:14.034 "strip_size_kb": 0, 00:14:14.034 "state": "online", 00:14:14.034 "raid_level": "raid1", 00:14:14.034 "superblock": false, 00:14:14.034 "num_base_bdevs": 2, 00:14:14.034 
"num_base_bdevs_discovered": 2, 00:14:14.034 "num_base_bdevs_operational": 2, 00:14:14.034 "base_bdevs_list": [ 00:14:14.034 { 00:14:14.034 "name": "BaseBdev1", 00:14:14.034 "uuid": "d88ead49-2cd2-52d3-9dd8-8a6313355207", 00:14:14.034 "is_configured": true, 00:14:14.034 "data_offset": 0, 00:14:14.034 "data_size": 65536 00:14:14.034 }, 00:14:14.034 { 00:14:14.034 "name": "BaseBdev2", 00:14:14.034 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:14.034 "is_configured": true, 00:14:14.034 "data_offset": 0, 00:14:14.034 "data_size": 65536 00:14:14.034 } 00:14:14.034 ] 00:14:14.034 }' 00:14:14.034 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.034 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.601 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:14.601 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.601 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.602 [2024-11-20 14:31:15.495347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.602 [2024-11-20 14:31:15.598941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.602 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.861 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.861 "name": "raid_bdev1", 00:14:14.861 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:14.861 "strip_size_kb": 0, 00:14:14.861 "state": "online", 00:14:14.861 "raid_level": "raid1", 00:14:14.861 "superblock": false, 00:14:14.861 "num_base_bdevs": 2, 00:14:14.861 "num_base_bdevs_discovered": 1, 00:14:14.861 "num_base_bdevs_operational": 1, 00:14:14.861 "base_bdevs_list": [ 00:14:14.861 { 00:14:14.861 "name": null, 00:14:14.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.861 "is_configured": false, 00:14:14.861 "data_offset": 0, 00:14:14.861 "data_size": 65536 00:14:14.861 }, 00:14:14.861 { 00:14:14.861 "name": "BaseBdev2", 00:14:14.861 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:14.861 "is_configured": true, 00:14:14.861 "data_offset": 0, 00:14:14.861 "data_size": 65536 00:14:14.861 } 00:14:14.861 ] 00:14:14.861 }' 00:14:14.861 14:31:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.861 14:31:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.861 [2024-11-20 14:31:15.731183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:14.861 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:14:14.861 Zero copy mechanism will not be used. 00:14:14.861 Running I/O for 60 seconds... 00:14:15.120 14:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.120 14:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.120 14:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.120 [2024-11-20 14:31:16.125660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.120 14:31:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.120 14:31:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:15.120 [2024-11-20 14:31:16.173847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:15.378 [2024-11-20 14:31:16.176882] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.378 [2024-11-20 14:31:16.305200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.378 [2024-11-20 14:31:16.306267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:15.637 [2024-11-20 14:31:16.435118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.637 [2024-11-20 14:31:16.435818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:15.896 139.00 IOPS, 417.00 MiB/s [2024-11-20T14:31:16.953Z] [2024-11-20 14:31:16.801128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:15.896 [2024-11-20 14:31:16.927500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.155 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.413 "name": "raid_bdev1", 00:14:16.413 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:16.413 "strip_size_kb": 0, 00:14:16.413 "state": "online", 00:14:16.413 "raid_level": "raid1", 00:14:16.413 "superblock": false, 00:14:16.413 "num_base_bdevs": 2, 00:14:16.413 "num_base_bdevs_discovered": 2, 00:14:16.413 "num_base_bdevs_operational": 2, 00:14:16.413 "process": { 00:14:16.413 "type": "rebuild", 00:14:16.413 "target": "spare", 00:14:16.413 "progress": { 00:14:16.413 "blocks": 12288, 00:14:16.413 "percent": 18 00:14:16.413 } 00:14:16.413 }, 00:14:16.413 "base_bdevs_list": [ 00:14:16.413 { 00:14:16.413 "name": "spare", 00:14:16.413 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:16.413 
"is_configured": true, 00:14:16.413 "data_offset": 0, 00:14:16.413 "data_size": 65536 00:14:16.413 }, 00:14:16.413 { 00:14:16.413 "name": "BaseBdev2", 00:14:16.413 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:16.413 "is_configured": true, 00:14:16.413 "data_offset": 0, 00:14:16.413 "data_size": 65536 00:14:16.413 } 00:14:16.413 ] 00:14:16.413 }' 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.413 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.413 [2024-11-20 14:31:17.323952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.413 [2024-11-20 14:31:17.398166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:16.671 [2024-11-20 14:31:17.508789] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.671 [2024-11-20 14:31:17.511673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.671 [2024-11-20 14:31:17.511741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.671 [2024-11-20 14:31:17.511759] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.671 [2024-11-20 14:31:17.563918] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.671 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.671 "name": "raid_bdev1", 00:14:16.671 
"uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:16.671 "strip_size_kb": 0, 00:14:16.671 "state": "online", 00:14:16.671 "raid_level": "raid1", 00:14:16.671 "superblock": false, 00:14:16.671 "num_base_bdevs": 2, 00:14:16.671 "num_base_bdevs_discovered": 1, 00:14:16.672 "num_base_bdevs_operational": 1, 00:14:16.672 "base_bdevs_list": [ 00:14:16.672 { 00:14:16.672 "name": null, 00:14:16.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.672 "is_configured": false, 00:14:16.672 "data_offset": 0, 00:14:16.672 "data_size": 65536 00:14:16.672 }, 00:14:16.672 { 00:14:16.672 "name": "BaseBdev2", 00:14:16.672 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:16.672 "is_configured": true, 00:14:16.672 "data_offset": 0, 00:14:16.672 "data_size": 65536 00:14:16.672 } 00:14:16.672 ] 00:14:16.672 }' 00:14:16.672 14:31:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.672 14:31:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.254 121.00 IOPS, 363.00 MiB/s [2024-11-20T14:31:18.311Z] 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.254 14:31:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.254 "name": "raid_bdev1", 00:14:17.254 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:17.254 "strip_size_kb": 0, 00:14:17.254 "state": "online", 00:14:17.254 "raid_level": "raid1", 00:14:17.254 "superblock": false, 00:14:17.254 "num_base_bdevs": 2, 00:14:17.254 "num_base_bdevs_discovered": 1, 00:14:17.254 "num_base_bdevs_operational": 1, 00:14:17.254 "base_bdevs_list": [ 00:14:17.254 { 00:14:17.254 "name": null, 00:14:17.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.254 "is_configured": false, 00:14:17.254 "data_offset": 0, 00:14:17.254 "data_size": 65536 00:14:17.254 }, 00:14:17.254 { 00:14:17.254 "name": "BaseBdev2", 00:14:17.254 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:17.254 "is_configured": true, 00:14:17.254 "data_offset": 0, 00:14:17.254 "data_size": 65536 00:14:17.254 } 00:14:17.254 ] 00:14:17.254 }' 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.254 [2024-11-20 14:31:18.235285] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.254 14:31:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:17.513 [2024-11-20 14:31:18.318830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:17.513 [2024-11-20 14:31:18.321350] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.513 [2024-11-20 14:31:18.454386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:17.772 [2024-11-20 14:31:18.574922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:17.772 [2024-11-20 14:31:18.575296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:17.772 145.67 IOPS, 437.00 MiB/s [2024-11-20T14:31:18.829Z] [2024-11-20 14:31:18.822112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.337 "name": "raid_bdev1", 00:14:18.337 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:18.337 "strip_size_kb": 0, 00:14:18.337 "state": "online", 00:14:18.337 "raid_level": "raid1", 00:14:18.337 "superblock": false, 00:14:18.337 "num_base_bdevs": 2, 00:14:18.337 "num_base_bdevs_discovered": 2, 00:14:18.337 "num_base_bdevs_operational": 2, 00:14:18.337 "process": { 00:14:18.337 "type": "rebuild", 00:14:18.337 "target": "spare", 00:14:18.337 "progress": { 00:14:18.337 "blocks": 14336, 00:14:18.337 "percent": 21 00:14:18.337 } 00:14:18.337 }, 00:14:18.337 "base_bdevs_list": [ 00:14:18.337 { 00:14:18.337 "name": "spare", 00:14:18.337 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:18.337 "is_configured": true, 00:14:18.337 "data_offset": 0, 00:14:18.337 "data_size": 65536 00:14:18.337 }, 00:14:18.337 { 00:14:18.337 "name": "BaseBdev2", 00:14:18.337 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:18.337 "is_configured": true, 00:14:18.337 "data_offset": 0, 00:14:18.337 "data_size": 65536 00:14:18.337 } 00:14:18.337 ] 00:14:18.337 }' 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.337 [2024-11-20 14:31:19.358224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:18.337 [2024-11-20 14:31:19.358661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:18.337 14:31:19 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.337 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=441 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.595 14:31:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.595 "name": "raid_bdev1", 00:14:18.595 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:18.595 "strip_size_kb": 0, 00:14:18.595 "state": "online", 00:14:18.595 "raid_level": "raid1", 00:14:18.595 "superblock": false, 00:14:18.595 "num_base_bdevs": 2, 00:14:18.595 "num_base_bdevs_discovered": 2, 00:14:18.595 "num_base_bdevs_operational": 2, 00:14:18.595 "process": { 00:14:18.595 "type": "rebuild", 00:14:18.595 "target": "spare", 00:14:18.595 "progress": { 00:14:18.595 "blocks": 16384, 00:14:18.595 "percent": 25 00:14:18.595 } 00:14:18.595 }, 00:14:18.595 "base_bdevs_list": [ 00:14:18.595 { 00:14:18.595 "name": "spare", 00:14:18.595 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:18.595 "is_configured": true, 00:14:18.595 "data_offset": 0, 00:14:18.595 "data_size": 65536 00:14:18.595 }, 00:14:18.595 { 00:14:18.595 "name": "BaseBdev2", 00:14:18.595 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:18.595 "is_configured": true, 00:14:18.595 "data_offset": 0, 00:14:18.595 "data_size": 65536 00:14:18.595 } 00:14:18.595 ] 00:14:18.595 }' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.595 14:31:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.854 [2024-11-20 14:31:19.674382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:18.854 131.50 IOPS, 394.50 
MiB/s [2024-11-20T14:31:19.911Z] [2024-11-20 14:31:19.792230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:19.112 [2024-11-20 14:31:20.123539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:19.370 [2024-11-20 14:31:20.276974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.629 "name": "raid_bdev1", 00:14:19.629 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:19.629 "strip_size_kb": 0, 00:14:19.629 "state": "online", 
00:14:19.629 "raid_level": "raid1", 00:14:19.629 "superblock": false, 00:14:19.629 "num_base_bdevs": 2, 00:14:19.629 "num_base_bdevs_discovered": 2, 00:14:19.629 "num_base_bdevs_operational": 2, 00:14:19.629 "process": { 00:14:19.629 "type": "rebuild", 00:14:19.629 "target": "spare", 00:14:19.629 "progress": { 00:14:19.629 "blocks": 32768, 00:14:19.629 "percent": 50 00:14:19.629 } 00:14:19.629 }, 00:14:19.629 "base_bdevs_list": [ 00:14:19.629 { 00:14:19.629 "name": "spare", 00:14:19.629 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:19.629 "is_configured": true, 00:14:19.629 "data_offset": 0, 00:14:19.629 "data_size": 65536 00:14:19.629 }, 00:14:19.629 { 00:14:19.629 "name": "BaseBdev2", 00:14:19.629 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:19.629 "is_configured": true, 00:14:19.629 "data_offset": 0, 00:14:19.629 "data_size": 65536 00:14:19.629 } 00:14:19.629 ] 00:14:19.629 }' 00:14:19.629 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.629 [2024-11-20 14:31:20.649922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:19.629 [2024-11-20 14:31:20.650292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:19.887 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.887 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.887 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.887 14:31:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.819 115.00 IOPS, 345.00 MiB/s [2024-11-20T14:31:21.876Z] 102.33 IOPS, 307.00 MiB/s [2024-11-20T14:31:21.876Z] 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.819 "name": "raid_bdev1", 00:14:20.819 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:20.819 "strip_size_kb": 0, 00:14:20.819 "state": "online", 00:14:20.819 "raid_level": "raid1", 00:14:20.819 "superblock": false, 00:14:20.819 "num_base_bdevs": 2, 00:14:20.819 "num_base_bdevs_discovered": 2, 00:14:20.819 "num_base_bdevs_operational": 2, 00:14:20.819 "process": { 00:14:20.819 "type": "rebuild", 00:14:20.819 "target": "spare", 00:14:20.819 "progress": { 00:14:20.819 "blocks": 53248, 00:14:20.819 "percent": 81 00:14:20.819 } 00:14:20.819 }, 00:14:20.819 "base_bdevs_list": [ 00:14:20.819 { 00:14:20.819 "name": "spare", 00:14:20.819 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:20.819 "is_configured": true, 00:14:20.819 "data_offset": 0, 00:14:20.819 
"data_size": 65536 00:14:20.819 }, 00:14:20.819 { 00:14:20.819 "name": "BaseBdev2", 00:14:20.819 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:20.819 "is_configured": true, 00:14:20.819 "data_offset": 0, 00:14:20.819 "data_size": 65536 00:14:20.819 } 00:14:20.819 ] 00:14:20.819 }' 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.819 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.078 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.078 14:31:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.078 [2024-11-20 14:31:21.919424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:21.356 [2024-11-20 14:31:22.361028] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:21.356 [2024-11-20 14:31:22.374238] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:21.356 [2024-11-20 14:31:22.384746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.930 92.86 IOPS, 278.57 MiB/s [2024-11-20T14:31:22.987Z] 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.930 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.930 "name": "raid_bdev1", 00:14:21.930 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:21.930 "strip_size_kb": 0, 00:14:21.930 "state": "online", 00:14:21.930 "raid_level": "raid1", 00:14:21.930 "superblock": false, 00:14:21.930 "num_base_bdevs": 2, 00:14:21.930 "num_base_bdevs_discovered": 2, 00:14:21.930 "num_base_bdevs_operational": 2, 00:14:21.930 "base_bdevs_list": [ 00:14:21.930 { 00:14:21.930 "name": "spare", 00:14:21.930 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:21.930 "is_configured": true, 00:14:21.930 "data_offset": 0, 00:14:21.931 "data_size": 65536 00:14:21.931 }, 00:14:21.931 { 00:14:21.931 "name": "BaseBdev2", 00:14:21.931 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:21.931 "is_configured": true, 00:14:21.931 "data_offset": 0, 00:14:21.931 "data_size": 65536 00:14:21.931 } 00:14:21.931 ] 00:14:21.931 }' 00:14:21.931 14:31:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.189 14:31:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.189 "name": "raid_bdev1", 00:14:22.189 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:22.189 "strip_size_kb": 0, 00:14:22.189 "state": "online", 00:14:22.189 "raid_level": "raid1", 00:14:22.189 "superblock": false, 00:14:22.189 "num_base_bdevs": 2, 00:14:22.189 "num_base_bdevs_discovered": 2, 00:14:22.189 "num_base_bdevs_operational": 2, 00:14:22.189 "base_bdevs_list": [ 00:14:22.189 { 00:14:22.189 "name": "spare", 00:14:22.189 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:22.189 "is_configured": true, 00:14:22.189 "data_offset": 0, 00:14:22.189 "data_size": 65536 00:14:22.189 }, 
00:14:22.189 { 00:14:22.189 "name": "BaseBdev2", 00:14:22.189 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:22.189 "is_configured": true, 00:14:22.189 "data_offset": 0, 00:14:22.189 "data_size": 65536 00:14:22.189 } 00:14:22.189 ] 00:14:22.189 }' 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.447 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.447 "name": "raid_bdev1", 00:14:22.447 "uuid": "52e1472c-e104-46a2-ba3f-8e9f55dda7fd", 00:14:22.447 "strip_size_kb": 0, 00:14:22.447 "state": "online", 00:14:22.447 "raid_level": "raid1", 00:14:22.447 "superblock": false, 00:14:22.447 "num_base_bdevs": 2, 00:14:22.447 "num_base_bdevs_discovered": 2, 00:14:22.447 "num_base_bdevs_operational": 2, 00:14:22.447 "base_bdevs_list": [ 00:14:22.447 { 00:14:22.447 "name": "spare", 00:14:22.447 "uuid": "c654d207-ea75-5698-813e-25ce63bdc90a", 00:14:22.447 "is_configured": true, 00:14:22.447 "data_offset": 0, 00:14:22.447 "data_size": 65536 00:14:22.447 }, 00:14:22.447 { 00:14:22.447 "name": "BaseBdev2", 00:14:22.447 "uuid": "1f171f90-7abd-55f3-94d9-d0ec12c1eab0", 00:14:22.447 "is_configured": true, 00:14:22.447 "data_offset": 0, 00:14:22.447 "data_size": 65536 00:14:22.447 } 00:14:22.447 ] 00:14:22.447 }' 00:14:22.447 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.447 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.964 85.62 IOPS, 256.88 MiB/s [2024-11-20T14:31:24.021Z] 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.964 [2024-11-20 14:31:23.776955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:14:22.964 [2024-11-20 14:31:23.777169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.964 00:14:22.964 Latency(us) 00:14:22.964 [2024-11-20T14:31:24.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.964 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:22.964 raid_bdev1 : 8.11 84.67 254.02 0.00 0.00 16533.84 292.31 117726.49 00:14:22.964 [2024-11-20T14:31:24.021Z] =================================================================================================================== 00:14:22.964 [2024-11-20T14:31:24.021Z] Total : 84.67 254.02 0.00 0.00 16533.84 292.31 117726.49 00:14:22.964 [2024-11-20 14:31:23.866156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.964 [2024-11-20 14:31:23.866208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.964 [2024-11-20 14:31:23.866315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:22.964 [2024-11-20 14:31:23.866418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:22.964 { 00:14:22.964 "results": [ 00:14:22.964 { 00:14:22.964 "job": "raid_bdev1", 00:14:22.964 "core_mask": "0x1", 00:14:22.964 "workload": "randrw", 00:14:22.964 "percentage": 50, 00:14:22.964 "status": "finished", 00:14:22.964 "queue_depth": 2, 00:14:22.964 "io_size": 3145728, 00:14:22.964 "runtime": 8.113565, 00:14:22.964 "iops": 84.67301365059626, 00:14:22.964 "mibps": 254.01904095178878, 00:14:22.964 "io_failed": 0, 00:14:22.964 "io_timeout": 0, 00:14:22.964 "avg_latency_us": 16533.84113272463, 00:14:22.964 "min_latency_us": 292.30545454545455, 00:14:22.964 "max_latency_us": 117726.48727272727 00:14:22.964 } 00:14:22.964 ], 00:14:22.964 "core_count": 1 00:14:22.964 } 00:14:22.964 14:31:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.964 14:31:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:14:23.222 /dev/nbd0 00:14:23.222 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.222 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.222 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:23.222 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:23.222 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.222 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.222 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.481 1+0 records in 00:14:23.481 1+0 records out 00:14:23.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267612 s, 15.3 MB/s 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.481 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:23.739 /dev/nbd1 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.739 1+0 records in 00:14:23.739 1+0 records out 00:14:23.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401027 s, 10.2 MB/s 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.739 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:23.997 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:23.997 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.997 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:23.997 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.997 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.997 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.997 14:31:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.255 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76787 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76787 ']' 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76787 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76787 00:14:24.513 killing process with pid 76787 00:14:24.513 Received shutdown 
signal, test time was about 9.690891 seconds 00:14:24.513 00:14:24.513 Latency(us) 00:14:24.513 [2024-11-20T14:31:25.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.513 [2024-11-20T14:31:25.570Z] =================================================================================================================== 00:14:24.513 [2024-11-20T14:31:25.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76787' 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76787 00:14:24.513 14:31:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76787 00:14:24.513 [2024-11-20 14:31:25.424994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.772 [2024-11-20 14:31:25.635580] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:26.147 00:14:26.147 real 0m13.121s 00:14:26.147 user 0m17.107s 00:14:26.147 sys 0m1.417s 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.147 ************************************ 00:14:26.147 END TEST raid_rebuild_test_io 00:14:26.147 ************************************ 00:14:26.147 14:31:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:26.147 14:31:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:26.147 14:31:26 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.147 14:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.147 ************************************ 00:14:26.147 START TEST raid_rebuild_test_sb_io 00:14:26.147 ************************************ 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local 
base_bdevs 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77173 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77173 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77173 ']' 00:14:26.147 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:26.148 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.148 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.148 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.148 14:31:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.148 [2024-11-20 14:31:26.981911] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:14:26.148 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:26.148 Zero copy mechanism will not be used. 00:14:26.148 [2024-11-20 14:31:26.982284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77173 ] 00:14:26.148 [2024-11-20 14:31:27.167455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.406 [2024-11-20 14:31:27.348972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.664 [2024-11-20 14:31:27.579902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.664 [2024-11-20 14:31:27.580188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.923 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.923 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:26.923 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.923 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:26.923 
14:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.923 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 BaseBdev1_malloc 00:14:27.182 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:27.182 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 [2024-11-20 14:31:28.003826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:27.182 [2024-11-20 14:31:28.003930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.182 [2024-11-20 14:31:28.003965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.182 [2024-11-20 14:31:28.003984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.182 [2024-11-20 14:31:28.007097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.182 [2024-11-20 14:31:28.007145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.182 BaseBdev1 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.182 BaseBdev2_malloc 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 [2024-11-20 14:31:28.061032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:27.182 [2024-11-20 14:31:28.061143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.182 [2024-11-20 14:31:28.061178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.182 [2024-11-20 14:31:28.061196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.182 [2024-11-20 14:31:28.064029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.182 [2024-11-20 14:31:28.064229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:27.182 BaseBdev2 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 spare_malloc 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 spare_delay 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 [2024-11-20 14:31:28.142688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.182 [2024-11-20 14:31:28.142782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.182 [2024-11-20 14:31:28.142820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:27.182 [2024-11-20 14:31:28.142839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.182 [2024-11-20 14:31:28.145859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.182 [2024-11-20 14:31:28.145911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.182 spare 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 
[2024-11-20 14:31:28.154944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.182 [2024-11-20 14:31:28.157730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.182 [2024-11-20 14:31:28.158129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:27.182 [2024-11-20 14:31:28.158284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.182 [2024-11-20 14:31:28.158710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:27.182 [2024-11-20 14:31:28.159079] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:27.182 [2024-11-20 14:31:28.159210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:27.182 [2024-11-20 14:31:28.159612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.182 "name": "raid_bdev1", 00:14:27.182 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:27.182 "strip_size_kb": 0, 00:14:27.182 "state": "online", 00:14:27.182 "raid_level": "raid1", 00:14:27.182 "superblock": true, 00:14:27.182 "num_base_bdevs": 2, 00:14:27.182 "num_base_bdevs_discovered": 2, 00:14:27.182 "num_base_bdevs_operational": 2, 00:14:27.182 "base_bdevs_list": [ 00:14:27.182 { 00:14:27.182 "name": "BaseBdev1", 00:14:27.182 "uuid": "134f2849-3256-54b7-bfff-16df22b917b0", 00:14:27.182 "is_configured": true, 00:14:27.182 "data_offset": 2048, 00:14:27.182 "data_size": 63488 00:14:27.182 }, 00:14:27.182 { 00:14:27.182 "name": "BaseBdev2", 00:14:27.182 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:27.182 "is_configured": true, 00:14:27.182 "data_offset": 2048, 00:14:27.182 "data_size": 63488 00:14:27.182 } 00:14:27.182 ] 00:14:27.182 }' 00:14:27.182 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.183 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:27.748 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:27.748 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.748 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.748 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.748 [2024-11-20 14:31:28.680101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.748 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.748 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:27.748 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.749 14:31:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.749 [2024-11-20 14:31:28.775725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.749 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:28.007 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.007 "name": "raid_bdev1", 00:14:28.007 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:28.007 "strip_size_kb": 0, 00:14:28.007 "state": "online", 00:14:28.007 "raid_level": "raid1", 00:14:28.007 "superblock": true, 00:14:28.007 "num_base_bdevs": 2, 00:14:28.007 "num_base_bdevs_discovered": 1, 00:14:28.007 "num_base_bdevs_operational": 1, 00:14:28.007 "base_bdevs_list": [ 00:14:28.007 { 00:14:28.007 "name": null, 00:14:28.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.007 "is_configured": false, 00:14:28.007 "data_offset": 0, 00:14:28.007 "data_size": 63488 00:14:28.007 }, 00:14:28.007 { 00:14:28.007 "name": "BaseBdev2", 00:14:28.007 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:28.007 "is_configured": true, 00:14:28.007 "data_offset": 2048, 00:14:28.007 "data_size": 63488 00:14:28.007 } 00:14:28.007 ] 00:14:28.007 }' 00:14:28.007 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.007 14:31:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.007 [2024-11-20 14:31:28.907939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:28.007 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:28.007 Zero copy mechanism will not be used. 00:14:28.007 Running I/O for 60 seconds... 
00:14:28.266 14:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.266 14:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.266 14:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.266 [2024-11-20 14:31:29.305529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.524 14:31:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.525 14:31:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:28.525 [2024-11-20 14:31:29.391006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:28.525 [2024-11-20 14:31:29.393578] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.525 [2024-11-20 14:31:29.520263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:28.782 [2024-11-20 14:31:29.639668] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:28.783 [2024-11-20 14:31:29.640317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:29.041 [2024-11-20 14:31:29.871851] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:29.041 [2024-11-20 14:31:29.882382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:29.299 163.00 IOPS, 489.00 MiB/s [2024-11-20T14:31:30.356Z] [2024-11-20 14:31:30.113031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:29.575 14:31:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.575 [2024-11-20 14:31:30.379120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.575 "name": "raid_bdev1", 00:14:29.575 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:29.575 "strip_size_kb": 0, 00:14:29.575 "state": "online", 00:14:29.575 "raid_level": "raid1", 00:14:29.575 "superblock": true, 00:14:29.575 "num_base_bdevs": 2, 00:14:29.575 "num_base_bdevs_discovered": 2, 00:14:29.575 "num_base_bdevs_operational": 2, 00:14:29.575 "process": { 00:14:29.575 "type": "rebuild", 00:14:29.575 "target": "spare", 00:14:29.575 "progress": { 00:14:29.575 "blocks": 12288, 00:14:29.575 "percent": 19 00:14:29.575 } 00:14:29.575 }, 00:14:29.575 "base_bdevs_list": [ 00:14:29.575 { 
00:14:29.575 "name": "spare", 00:14:29.575 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:29.575 "is_configured": true, 00:14:29.575 "data_offset": 2048, 00:14:29.575 "data_size": 63488 00:14:29.575 }, 00:14:29.575 { 00:14:29.575 "name": "BaseBdev2", 00:14:29.575 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:29.575 "is_configured": true, 00:14:29.575 "data_offset": 2048, 00:14:29.575 "data_size": 63488 00:14:29.575 } 00:14:29.575 ] 00:14:29.575 }' 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.575 [2024-11-20 14:31:30.483530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.575 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.575 [2024-11-20 14:31:30.521825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.575 [2024-11-20 14:31:30.595766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:29.575 [2024-11-20 14:31:30.604915] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.575 [2024-11-20 14:31:30.615084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.575 [2024-11-20 
14:31:30.615137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.575 [2024-11-20 14:31:30.615155] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.834 [2024-11-20 14:31:30.659449] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.834 14:31:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.834 "name": "raid_bdev1", 00:14:29.834 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:29.834 "strip_size_kb": 0, 00:14:29.834 "state": "online", 00:14:29.834 "raid_level": "raid1", 00:14:29.834 "superblock": true, 00:14:29.834 "num_base_bdevs": 2, 00:14:29.834 "num_base_bdevs_discovered": 1, 00:14:29.834 "num_base_bdevs_operational": 1, 00:14:29.834 "base_bdevs_list": [ 00:14:29.834 { 00:14:29.834 "name": null, 00:14:29.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.834 "is_configured": false, 00:14:29.834 "data_offset": 0, 00:14:29.834 "data_size": 63488 00:14:29.834 }, 00:14:29.834 { 00:14:29.834 "name": "BaseBdev2", 00:14:29.834 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:29.834 "is_configured": true, 00:14:29.834 "data_offset": 2048, 00:14:29.834 "data_size": 63488 00:14:29.834 } 00:14:29.834 ] 00:14:29.834 }' 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.834 14:31:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.350 150.50 IOPS, 451.50 MiB/s [2024-11-20T14:31:31.407Z] 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.350 
14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.350 "name": "raid_bdev1", 00:14:30.350 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:30.350 "strip_size_kb": 0, 00:14:30.350 "state": "online", 00:14:30.350 "raid_level": "raid1", 00:14:30.350 "superblock": true, 00:14:30.350 "num_base_bdevs": 2, 00:14:30.350 "num_base_bdevs_discovered": 1, 00:14:30.350 "num_base_bdevs_operational": 1, 00:14:30.350 "base_bdevs_list": [ 00:14:30.350 { 00:14:30.350 "name": null, 00:14:30.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.350 "is_configured": false, 00:14:30.350 "data_offset": 0, 00:14:30.350 "data_size": 63488 00:14:30.350 }, 00:14:30.350 { 00:14:30.350 "name": "BaseBdev2", 00:14:30.350 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:30.350 "is_configured": true, 00:14:30.350 "data_offset": 2048, 00:14:30.350 "data_size": 63488 00:14:30.350 } 00:14:30.350 ] 00:14:30.350 }' 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.350 14:31:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.350 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.350 [2024-11-20 14:31:31.369199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.608 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.608 14:31:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:30.608 [2024-11-20 14:31:31.446939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:30.608 [2024-11-20 14:31:31.449971] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.608 [2024-11-20 14:31:31.559755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.608 [2024-11-20 14:31:31.560827] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.865 [2024-11-20 14:31:31.781594] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:30.865 [2024-11-20 14:31:31.782142] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:31.123 155.00 IOPS, 465.00 MiB/s [2024-11-20T14:31:32.180Z] [2024-11-20 14:31:32.107254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:31.123 [2024-11-20 14:31:32.108174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:31.381 [2024-11-20 14:31:32.254759] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.381 [2024-11-20 14:31:32.255211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.381 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.654 "name": "raid_bdev1", 00:14:31.654 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:31.654 "strip_size_kb": 0, 00:14:31.654 "state": "online", 00:14:31.654 "raid_level": "raid1", 00:14:31.654 "superblock": true, 00:14:31.654 "num_base_bdevs": 2, 00:14:31.654 "num_base_bdevs_discovered": 2, 00:14:31.654 "num_base_bdevs_operational": 2, 00:14:31.654 "process": { 00:14:31.654 "type": "rebuild", 00:14:31.654 "target": "spare", 00:14:31.654 "progress": { 
00:14:31.654 "blocks": 12288, 00:14:31.654 "percent": 19 00:14:31.654 } 00:14:31.654 }, 00:14:31.654 "base_bdevs_list": [ 00:14:31.654 { 00:14:31.654 "name": "spare", 00:14:31.654 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:31.654 "is_configured": true, 00:14:31.654 "data_offset": 2048, 00:14:31.654 "data_size": 63488 00:14:31.654 }, 00:14:31.654 { 00:14:31.654 "name": "BaseBdev2", 00:14:31.654 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:31.654 "is_configured": true, 00:14:31.654 "data_offset": 2048, 00:14:31.654 "data_size": 63488 00:14:31.654 } 00:14:31.654 ] 00:14:31.654 }' 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.654 [2024-11-20 14:31:32.516404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:31.654 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:31.654 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=454 00:14:31.655 
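The `line 666: [: =: unary operator expected` message captured above is a classic shell pitfall: an unquoted expansion of an empty or unset variable inside `[ ]` makes the operand vanish, so `test` sees `[ = false ]` with too few arguments. A minimal reproduction sketch (the variable name `flag` is hypothetical, not taken from bdev_raid.sh) showing the failure mode and the quoting fix:

```shell
#!/bin/sh
flag=""   # empty, as the tested variable evidently was at bdev_raid.sh line 666

# Unquoted expansion collapses to: [ = false ]
# -> "[: =: unary operator expected" (bash) or "unexpected operator" (dash)
# [ $flag = false ] && echo "disabled"

# Quoting keeps the empty string as a real operand, so the test stays well-formed
if [ "$flag" = false ]; then
    echo "disabled"
else
    echo "not disabled"
fi
```

In the log the script continues past the error because the failed `[` simply makes that branch evaluate false; quoting the expansion would silence the diagnostic without changing the control flow here.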
14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.655 "name": "raid_bdev1", 00:14:31.655 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:31.655 "strip_size_kb": 0, 00:14:31.655 "state": "online", 00:14:31.655 "raid_level": "raid1", 00:14:31.655 "superblock": true, 00:14:31.655 "num_base_bdevs": 2, 00:14:31.655 "num_base_bdevs_discovered": 2, 00:14:31.655 "num_base_bdevs_operational": 2, 00:14:31.655 "process": { 00:14:31.655 "type": "rebuild", 00:14:31.655 "target": "spare", 00:14:31.655 "progress": { 00:14:31.655 "blocks": 14336, 00:14:31.655 "percent": 22 00:14:31.655 } 00:14:31.655 }, 00:14:31.655 "base_bdevs_list": [ 00:14:31.655 { 00:14:31.655 "name": "spare", 00:14:31.655 
"uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:31.655 "is_configured": true, 00:14:31.655 "data_offset": 2048, 00:14:31.655 "data_size": 63488 00:14:31.655 }, 00:14:31.655 { 00:14:31.655 "name": "BaseBdev2", 00:14:31.655 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:31.655 "is_configured": true, 00:14:31.655 "data_offset": 2048, 00:14:31.655 "data_size": 63488 00:14:31.655 } 00:14:31.655 ] 00:14:31.655 }' 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.655 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.914 [2024-11-20 14:31:32.727184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.914 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.914 14:31:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.914 136.25 IOPS, 408.75 MiB/s [2024-11-20T14:31:32.971Z] [2024-11-20 14:31:32.954065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:32.172 [2024-11-20 14:31:33.073942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:32.738 [2024-11-20 14:31:33.522795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.738 [2024-11-20 14:31:33.741897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:32.738 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.996 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.996 "name": "raid_bdev1", 00:14:32.996 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:32.996 "strip_size_kb": 0, 00:14:32.996 "state": "online", 00:14:32.996 "raid_level": "raid1", 00:14:32.996 "superblock": true, 00:14:32.996 "num_base_bdevs": 2, 00:14:32.996 "num_base_bdevs_discovered": 2, 00:14:32.996 "num_base_bdevs_operational": 2, 00:14:32.996 "process": { 00:14:32.996 "type": "rebuild", 00:14:32.996 "target": "spare", 00:14:32.996 "progress": { 00:14:32.996 "blocks": 32768, 00:14:32.996 "percent": 51 00:14:32.996 } 00:14:32.996 }, 00:14:32.996 "base_bdevs_list": [ 00:14:32.996 { 00:14:32.996 "name": "spare", 00:14:32.996 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:32.996 "is_configured": true, 00:14:32.996 "data_offset": 2048, 00:14:32.996 
"data_size": 63488 00:14:32.996 }, 00:14:32.996 { 00:14:32.996 "name": "BaseBdev2", 00:14:32.996 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:32.996 "is_configured": true, 00:14:32.996 "data_offset": 2048, 00:14:32.996 "data_size": 63488 00:14:32.996 } 00:14:32.996 ] 00:14:32.996 }' 00:14:32.996 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.996 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.996 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.996 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.996 14:31:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.996 121.00 IOPS, 363.00 MiB/s [2024-11-20T14:31:34.053Z] [2024-11-20 14:31:33.953158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:33.254 [2024-11-20 14:31:34.300956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.850 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.111 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.111 107.50 IOPS, 322.50 MiB/s [2024-11-20T14:31:35.168Z] 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.111 "name": "raid_bdev1", 00:14:34.111 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:34.111 "strip_size_kb": 0, 00:14:34.111 "state": "online", 00:14:34.111 "raid_level": "raid1", 00:14:34.111 "superblock": true, 00:14:34.111 "num_base_bdevs": 2, 00:14:34.111 "num_base_bdevs_discovered": 2, 00:14:34.111 "num_base_bdevs_operational": 2, 00:14:34.111 "process": { 00:14:34.111 "type": "rebuild", 00:14:34.111 "target": "spare", 00:14:34.111 "progress": { 00:14:34.111 "blocks": 49152, 00:14:34.111 "percent": 77 00:14:34.111 } 00:14:34.111 }, 00:14:34.111 "base_bdevs_list": [ 00:14:34.111 { 00:14:34.111 "name": "spare", 00:14:34.111 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:34.111 "is_configured": true, 00:14:34.111 "data_offset": 2048, 00:14:34.111 "data_size": 63488 00:14:34.111 }, 00:14:34.111 { 00:14:34.111 "name": "BaseBdev2", 00:14:34.111 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:34.111 "is_configured": true, 00:14:34.111 "data_offset": 2048, 00:14:34.111 "data_size": 63488 00:14:34.111 } 00:14:34.111 ] 00:14:34.111 }' 00:14:34.111 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.111 14:31:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.111 14:31:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.111 14:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.111 14:31:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.369 [2024-11-20 14:31:35.241139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:34.627 [2024-11-20 14:31:35.674875] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:34.885 [2024-11-20 14:31:35.696619] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:34.885 [2024-11-20 14:31:35.699147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.143 98.14 IOPS, 294.43 MiB/s [2024-11-20T14:31:36.200Z] 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.143 14:31:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.143 "name": "raid_bdev1", 00:14:35.143 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:35.143 "strip_size_kb": 0, 00:14:35.143 "state": "online", 00:14:35.143 "raid_level": "raid1", 00:14:35.143 "superblock": true, 00:14:35.143 "num_base_bdevs": 2, 00:14:35.143 "num_base_bdevs_discovered": 2, 00:14:35.143 "num_base_bdevs_operational": 2, 00:14:35.143 "base_bdevs_list": [ 00:14:35.143 { 00:14:35.143 "name": "spare", 00:14:35.143 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:35.143 "is_configured": true, 00:14:35.143 "data_offset": 2048, 00:14:35.143 "data_size": 63488 00:14:35.143 }, 00:14:35.143 { 00:14:35.143 "name": "BaseBdev2", 00:14:35.143 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:35.143 "is_configured": true, 00:14:35.143 "data_offset": 2048, 00:14:35.143 "data_size": 63488 00:14:35.143 } 00:14:35.143 ] 00:14:35.143 }' 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:35.143 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.402 14:31:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.402 "name": "raid_bdev1", 00:14:35.402 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:35.402 "strip_size_kb": 0, 00:14:35.402 "state": "online", 00:14:35.402 "raid_level": "raid1", 00:14:35.402 "superblock": true, 00:14:35.402 "num_base_bdevs": 2, 00:14:35.402 "num_base_bdevs_discovered": 2, 00:14:35.402 "num_base_bdevs_operational": 2, 00:14:35.402 "base_bdevs_list": [ 00:14:35.402 { 00:14:35.402 "name": "spare", 00:14:35.402 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:35.402 "is_configured": true, 00:14:35.402 "data_offset": 2048, 00:14:35.402 "data_size": 63488 00:14:35.402 }, 00:14:35.402 { 00:14:35.402 "name": "BaseBdev2", 00:14:35.402 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:35.402 "is_configured": true, 00:14:35.402 "data_offset": 2048, 00:14:35.402 "data_size": 63488 00:14:35.402 } 00:14:35.402 ] 00:14:35.402 }' 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.402 14:31:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:35.402 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.402 "name": "raid_bdev1", 00:14:35.402 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:35.402 "strip_size_kb": 0, 00:14:35.402 "state": "online", 00:14:35.402 "raid_level": "raid1", 00:14:35.402 "superblock": true, 00:14:35.402 "num_base_bdevs": 2, 00:14:35.402 "num_base_bdevs_discovered": 2, 00:14:35.402 "num_base_bdevs_operational": 2, 00:14:35.402 "base_bdevs_list": [ 00:14:35.402 { 00:14:35.402 "name": "spare", 00:14:35.402 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:35.402 "is_configured": true, 00:14:35.403 "data_offset": 2048, 00:14:35.403 "data_size": 63488 00:14:35.403 }, 00:14:35.403 { 00:14:35.403 "name": "BaseBdev2", 00:14:35.403 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:35.403 "is_configured": true, 00:14:35.403 "data_offset": 2048, 00:14:35.403 "data_size": 63488 00:14:35.403 } 00:14:35.403 ] 00:14:35.403 }' 00:14:35.403 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.403 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.969 [2024-11-20 14:31:36.882306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.969 [2024-11-20 14:31:36.882395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.969 91.50 IOPS, 274.50 MiB/s 00:14:35.969 Latency(us) 00:14:35.969 [2024-11-20T14:31:37.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.969 Job: raid_bdev1 (Core Mask 0x1, 
workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:35.969 raid_bdev1 : 8.05 91.17 273.50 0.00 0.00 14483.98 286.72 118203.11 00:14:35.969 [2024-11-20T14:31:37.026Z] =================================================================================================================== 00:14:35.969 [2024-11-20T14:31:37.026Z] Total : 91.17 273.50 0.00 0.00 14483.98 286.72 118203.11 00:14:35.969 [2024-11-20 14:31:36.982132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.969 { 00:14:35.969 "results": [ 00:14:35.969 { 00:14:35.969 "job": "raid_bdev1", 00:14:35.969 "core_mask": "0x1", 00:14:35.969 "workload": "randrw", 00:14:35.969 "percentage": 50, 00:14:35.969 "status": "finished", 00:14:35.969 "queue_depth": 2, 00:14:35.969 "io_size": 3145728, 00:14:35.969 "runtime": 8.051196, 00:14:35.969 "iops": 91.16657947465197, 00:14:35.969 "mibps": 273.49973842395593, 00:14:35.969 "io_failed": 0, 00:14:35.969 "io_timeout": 0, 00:14:35.969 "avg_latency_us": 14483.980302204605, 00:14:35.969 "min_latency_us": 286.72, 00:14:35.969 "max_latency_us": 118203.11272727273 00:14:35.969 } 00:14:35.969 ], 00:14:35.969 "core_count": 1 00:14:35.969 } 00:14:35.969 [2024-11-20 14:31:36.982587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.969 [2024-11-20 14:31:36.982773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.969 [2024-11-20 14:31:36.982795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.969 14:31:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.969 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.228 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:36.490 /dev/nbd0 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:36.490 14:31:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.490 1+0 records in 00:14:36.490 1+0 records out 00:14:36.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276204 s, 14.8 MB/s 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.490 
14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.490 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:36.749 /dev/nbd1 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.749 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.749 1+0 records in 00:14:36.749 1+0 records out 00:14:36.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322484 s, 12.7 MB/s 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.750 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:37.008 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:37.008 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:37.008 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:37.008 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.008 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:37.008 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.008 14:31:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:37.267 14:31:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.267 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.525 
[2024-11-20 14:31:38.543062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.525 [2024-11-20 14:31:38.543131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.525 [2024-11-20 14:31:38.543166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:37.525 [2024-11-20 14:31:38.543181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.525 [2024-11-20 14:31:38.546296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.525 [2024-11-20 14:31:38.546540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.525 [2024-11-20 14:31:38.546799] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:37.525 [2024-11-20 14:31:38.546884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.525 [2024-11-20 14:31:38.547092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.525 spare 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.525 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.783 [2024-11-20 14:31:38.647286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:37.783 [2024-11-20 14:31:38.647376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.783 [2024-11-20 14:31:38.647845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:37.783 [2024-11-20 14:31:38.648179] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:37.783 [2024-11-20 14:31:38.648205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:37.783 [2024-11-20 14:31:38.648449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.783 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.783 "name": "raid_bdev1", 00:14:37.783 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:37.783 "strip_size_kb": 0, 00:14:37.783 "state": "online", 00:14:37.783 "raid_level": "raid1", 00:14:37.783 "superblock": true, 00:14:37.783 "num_base_bdevs": 2, 00:14:37.783 "num_base_bdevs_discovered": 2, 00:14:37.783 "num_base_bdevs_operational": 2, 00:14:37.783 "base_bdevs_list": [ 00:14:37.783 { 00:14:37.783 "name": "spare", 00:14:37.783 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:37.783 "is_configured": true, 00:14:37.783 "data_offset": 2048, 00:14:37.783 "data_size": 63488 00:14:37.783 }, 00:14:37.783 { 00:14:37.783 "name": "BaseBdev2", 00:14:37.783 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:37.784 "is_configured": true, 00:14:37.784 "data_offset": 2048, 00:14:37.784 "data_size": 63488 00:14:37.784 } 00:14:37.784 ] 00:14:37.784 }' 00:14:37.784 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.784 14:31:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.350 "name": "raid_bdev1", 00:14:38.350 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:38.350 "strip_size_kb": 0, 00:14:38.350 "state": "online", 00:14:38.350 "raid_level": "raid1", 00:14:38.350 "superblock": true, 00:14:38.350 "num_base_bdevs": 2, 00:14:38.350 "num_base_bdevs_discovered": 2, 00:14:38.350 "num_base_bdevs_operational": 2, 00:14:38.350 "base_bdevs_list": [ 00:14:38.350 { 00:14:38.350 "name": "spare", 00:14:38.350 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:38.350 "is_configured": true, 00:14:38.350 "data_offset": 2048, 00:14:38.350 "data_size": 63488 00:14:38.350 }, 00:14:38.350 { 00:14:38.350 "name": "BaseBdev2", 00:14:38.350 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:38.350 "is_configured": true, 00:14:38.350 "data_offset": 2048, 00:14:38.350 "data_size": 63488 00:14:38.350 } 00:14:38.350 ] 00:14:38.350 }' 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.350 [2024-11-20 14:31:39.383594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.350 14:31:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.350 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.608 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.608 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.608 "name": "raid_bdev1", 00:14:38.608 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:38.608 "strip_size_kb": 0, 00:14:38.608 "state": "online", 00:14:38.608 "raid_level": "raid1", 00:14:38.608 "superblock": true, 00:14:38.608 "num_base_bdevs": 2, 00:14:38.608 "num_base_bdevs_discovered": 1, 00:14:38.608 "num_base_bdevs_operational": 1, 00:14:38.608 "base_bdevs_list": [ 00:14:38.608 { 00:14:38.608 "name": null, 00:14:38.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.608 "is_configured": false, 00:14:38.608 "data_offset": 0, 00:14:38.608 "data_size": 63488 00:14:38.608 }, 00:14:38.608 { 00:14:38.608 "name": "BaseBdev2", 00:14:38.608 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:38.608 "is_configured": true, 00:14:38.608 "data_offset": 2048, 00:14:38.608 "data_size": 63488 00:14:38.608 } 00:14:38.608 ] 00:14:38.608 }' 00:14:38.608 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.608 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.868 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.868 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.868 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.868 [2024-11-20 14:31:39.900025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.868 [2024-11-20 14:31:39.900360] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:38.868 [2024-11-20 14:31:39.900403] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:38.868 [2024-11-20 14:31:39.900457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.868 [2024-11-20 14:31:39.919411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:38.868 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.868 14:31:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:38.868 [2024-11-20 14:31:39.922368] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.243 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.243 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.243 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.244 "name": "raid_bdev1", 00:14:40.244 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:40.244 "strip_size_kb": 0, 00:14:40.244 "state": "online", 00:14:40.244 "raid_level": "raid1", 00:14:40.244 "superblock": true, 00:14:40.244 "num_base_bdevs": 2, 00:14:40.244 "num_base_bdevs_discovered": 2, 00:14:40.244 "num_base_bdevs_operational": 2, 00:14:40.244 "process": { 00:14:40.244 "type": "rebuild", 00:14:40.244 "target": "spare", 00:14:40.244 "progress": { 00:14:40.244 "blocks": 18432, 00:14:40.244 "percent": 29 00:14:40.244 } 00:14:40.244 }, 00:14:40.244 "base_bdevs_list": [ 00:14:40.244 { 00:14:40.244 "name": "spare", 00:14:40.244 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:40.244 "is_configured": true, 00:14:40.244 "data_offset": 2048, 00:14:40.244 "data_size": 63488 00:14:40.244 }, 00:14:40.244 { 00:14:40.244 "name": "BaseBdev2", 00:14:40.244 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:40.244 "is_configured": true, 00:14:40.244 "data_offset": 2048, 00:14:40.244 "data_size": 63488 00:14:40.244 } 00:14:40.244 ] 00:14:40.244 }' 00:14:40.244 14:31:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.244 
14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 [2024-11-20 14:31:41.108502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.244 [2024-11-20 14:31:41.134343] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.244 [2024-11-20 14:31:41.134456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.244 [2024-11-20 14:31:41.134478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.244 [2024-11-20 14:31:41.134496] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.244 "name": "raid_bdev1", 00:14:40.244 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:40.244 "strip_size_kb": 0, 00:14:40.244 "state": "online", 00:14:40.244 "raid_level": "raid1", 00:14:40.244 "superblock": true, 00:14:40.244 "num_base_bdevs": 2, 00:14:40.244 "num_base_bdevs_discovered": 1, 00:14:40.244 "num_base_bdevs_operational": 1, 00:14:40.244 "base_bdevs_list": [ 00:14:40.244 { 00:14:40.244 "name": null, 00:14:40.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.244 "is_configured": false, 00:14:40.244 "data_offset": 0, 00:14:40.244 "data_size": 63488 00:14:40.244 }, 00:14:40.244 { 00:14:40.244 "name": "BaseBdev2", 00:14:40.244 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:40.244 "is_configured": true, 00:14:40.244 "data_offset": 2048, 00:14:40.244 "data_size": 63488 00:14:40.244 } 00:14:40.244 ] 00:14:40.244 }' 00:14:40.244 14:31:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.244 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.810 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:40.810 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.810 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.810 [2024-11-20 14:31:41.681388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:40.810 [2024-11-20 14:31:41.681521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.810 [2024-11-20 14:31:41.681559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:40.810 [2024-11-20 14:31:41.681579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.810 [2024-11-20 14:31:41.682292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.810 [2024-11-20 14:31:41.682342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:40.810 [2024-11-20 14:31:41.682487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:40.810 [2024-11-20 14:31:41.682514] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:40.810 [2024-11-20 14:31:41.682529] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:40.810 [2024-11-20 14:31:41.682563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.810 [2024-11-20 14:31:41.700677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:40.810 spare 00:14:40.810 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.810 14:31:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:40.810 [2024-11-20 14:31:41.703385] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.745 "name": "raid_bdev1", 00:14:41.745 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:41.745 "strip_size_kb": 0, 00:14:41.745 
"state": "online", 00:14:41.745 "raid_level": "raid1", 00:14:41.745 "superblock": true, 00:14:41.745 "num_base_bdevs": 2, 00:14:41.745 "num_base_bdevs_discovered": 2, 00:14:41.745 "num_base_bdevs_operational": 2, 00:14:41.745 "process": { 00:14:41.745 "type": "rebuild", 00:14:41.745 "target": "spare", 00:14:41.745 "progress": { 00:14:41.745 "blocks": 20480, 00:14:41.745 "percent": 32 00:14:41.745 } 00:14:41.745 }, 00:14:41.745 "base_bdevs_list": [ 00:14:41.745 { 00:14:41.745 "name": "spare", 00:14:41.745 "uuid": "c8699594-b644-5c0e-8933-f406a009f680", 00:14:41.745 "is_configured": true, 00:14:41.745 "data_offset": 2048, 00:14:41.745 "data_size": 63488 00:14:41.745 }, 00:14:41.745 { 00:14:41.745 "name": "BaseBdev2", 00:14:41.745 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:41.745 "is_configured": true, 00:14:41.745 "data_offset": 2048, 00:14:41.745 "data_size": 63488 00:14:41.745 } 00:14:41.745 ] 00:14:41.745 }' 00:14:41.745 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.003 [2024-11-20 14:31:42.869677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.003 [2024-11-20 14:31:42.914393] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:42.003 [2024-11-20 14:31:42.914506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.003 [2024-11-20 14:31:42.914539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.003 [2024-11-20 14:31:42.914552] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.003 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.004 14:31:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.004 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.004 "name": "raid_bdev1", 00:14:42.004 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:42.004 "strip_size_kb": 0, 00:14:42.004 "state": "online", 00:14:42.004 "raid_level": "raid1", 00:14:42.004 "superblock": true, 00:14:42.004 "num_base_bdevs": 2, 00:14:42.004 "num_base_bdevs_discovered": 1, 00:14:42.004 "num_base_bdevs_operational": 1, 00:14:42.004 "base_bdevs_list": [ 00:14:42.004 { 00:14:42.004 "name": null, 00:14:42.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.004 "is_configured": false, 00:14:42.004 "data_offset": 0, 00:14:42.004 "data_size": 63488 00:14:42.004 }, 00:14:42.004 { 00:14:42.004 "name": "BaseBdev2", 00:14:42.004 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:42.004 "is_configured": true, 00:14:42.004 "data_offset": 2048, 00:14:42.004 "data_size": 63488 00:14:42.004 } 00:14:42.004 ] 00:14:42.004 }' 00:14:42.004 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.004 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.569 "name": "raid_bdev1", 00:14:42.569 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:42.569 "strip_size_kb": 0, 00:14:42.569 "state": "online", 00:14:42.569 "raid_level": "raid1", 00:14:42.569 "superblock": true, 00:14:42.569 "num_base_bdevs": 2, 00:14:42.569 "num_base_bdevs_discovered": 1, 00:14:42.569 "num_base_bdevs_operational": 1, 00:14:42.569 "base_bdevs_list": [ 00:14:42.569 { 00:14:42.569 "name": null, 00:14:42.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.569 "is_configured": false, 00:14:42.569 "data_offset": 0, 00:14:42.569 "data_size": 63488 00:14:42.569 }, 00:14:42.569 { 00:14:42.569 "name": "BaseBdev2", 00:14:42.569 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:42.569 "is_configured": true, 00:14:42.569 "data_offset": 2048, 00:14:42.569 "data_size": 63488 00:14:42.569 } 00:14:42.569 ] 00:14:42.569 }' 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:42.569 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.827 [2024-11-20 14:31:43.639357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:42.827 [2024-11-20 14:31:43.639447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.827 [2024-11-20 14:31:43.639490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:42.827 [2024-11-20 14:31:43.639507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.827 [2024-11-20 14:31:43.640118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.827 [2024-11-20 14:31:43.640158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:42.827 [2024-11-20 14:31:43.640268] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:42.827 [2024-11-20 14:31:43.640291] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:42.827 [2024-11-20 14:31:43.640307] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:42.827 [2024-11-20 14:31:43.640323] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:42.827 BaseBdev1 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.827 14:31:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.775 "name": "raid_bdev1", 00:14:43.775 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:43.775 "strip_size_kb": 0, 00:14:43.775 "state": "online", 00:14:43.775 "raid_level": "raid1", 00:14:43.775 "superblock": true, 00:14:43.775 "num_base_bdevs": 2, 00:14:43.775 "num_base_bdevs_discovered": 1, 00:14:43.775 "num_base_bdevs_operational": 1, 00:14:43.775 "base_bdevs_list": [ 00:14:43.775 { 00:14:43.775 "name": null, 00:14:43.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.775 "is_configured": false, 00:14:43.775 "data_offset": 0, 00:14:43.775 "data_size": 63488 00:14:43.775 }, 00:14:43.775 { 00:14:43.775 "name": "BaseBdev2", 00:14:43.775 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:43.775 "is_configured": true, 00:14:43.775 "data_offset": 2048, 00:14:43.775 "data_size": 63488 00:14:43.775 } 00:14:43.775 ] 00:14:43.775 }' 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.775 14:31:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.391 "name": "raid_bdev1", 00:14:44.391 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:44.391 "strip_size_kb": 0, 00:14:44.391 "state": "online", 00:14:44.391 "raid_level": "raid1", 00:14:44.391 "superblock": true, 00:14:44.391 "num_base_bdevs": 2, 00:14:44.391 "num_base_bdevs_discovered": 1, 00:14:44.391 "num_base_bdevs_operational": 1, 00:14:44.391 "base_bdevs_list": [ 00:14:44.391 { 00:14:44.391 "name": null, 00:14:44.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.391 "is_configured": false, 00:14:44.391 "data_offset": 0, 00:14:44.391 "data_size": 63488 00:14:44.391 }, 00:14:44.391 { 00:14:44.391 "name": "BaseBdev2", 00:14:44.391 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:44.391 "is_configured": true, 00:14:44.391 "data_offset": 2048, 00:14:44.391 "data_size": 63488 00:14:44.391 } 00:14:44.391 ] 00:14:44.391 }' 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.391 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.392 [2024-11-20 14:31:45.320256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.392 [2024-11-20 14:31:45.320524] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:44.392 [2024-11-20 14:31:45.320550] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.392 request: 00:14:44.392 { 00:14:44.392 "base_bdev": "BaseBdev1", 00:14:44.392 "raid_bdev": "raid_bdev1", 00:14:44.392 "method": "bdev_raid_add_base_bdev", 00:14:44.392 "req_id": 1 00:14:44.392 } 00:14:44.392 Got JSON-RPC error response 00:14:44.392 response: 00:14:44.392 { 00:14:44.392 "code": -22, 00:14:44.392 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:44.392 } 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.392 14:31:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.327 14:31:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.327 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.586 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.586 "name": "raid_bdev1", 00:14:45.586 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:45.586 "strip_size_kb": 0, 00:14:45.586 "state": "online", 00:14:45.586 "raid_level": "raid1", 00:14:45.586 "superblock": true, 00:14:45.586 "num_base_bdevs": 2, 00:14:45.586 "num_base_bdevs_discovered": 1, 00:14:45.586 "num_base_bdevs_operational": 1, 00:14:45.586 "base_bdevs_list": [ 00:14:45.586 { 00:14:45.586 "name": null, 00:14:45.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.586 "is_configured": false, 00:14:45.586 "data_offset": 0, 00:14:45.586 "data_size": 63488 00:14:45.586 }, 00:14:45.586 { 00:14:45.586 "name": "BaseBdev2", 00:14:45.586 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:45.586 "is_configured": true, 00:14:45.586 "data_offset": 2048, 00:14:45.586 "data_size": 63488 00:14:45.586 } 00:14:45.586 ] 00:14:45.586 }' 00:14:45.586 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.586 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.844 14:31:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.844 "name": "raid_bdev1", 00:14:45.844 "uuid": "692996d2-aaef-403b-a378-3ce7599a9938", 00:14:45.844 "strip_size_kb": 0, 00:14:45.844 "state": "online", 00:14:45.844 "raid_level": "raid1", 00:14:45.844 "superblock": true, 00:14:45.844 "num_base_bdevs": 2, 00:14:45.844 "num_base_bdevs_discovered": 1, 00:14:45.844 "num_base_bdevs_operational": 1, 00:14:45.844 "base_bdevs_list": [ 00:14:45.844 { 00:14:45.844 "name": null, 00:14:45.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.844 "is_configured": false, 00:14:45.844 "data_offset": 0, 00:14:45.844 "data_size": 63488 00:14:45.844 }, 00:14:45.844 { 00:14:45.844 "name": "BaseBdev2", 00:14:45.844 "uuid": "94da4f00-b1bc-5a2d-816b-981fbf08dd29", 00:14:45.844 "is_configured": true, 00:14:45.844 "data_offset": 2048, 00:14:45.844 "data_size": 63488 00:14:45.844 } 00:14:45.844 ] 00:14:45.844 }' 00:14:45.844 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.103 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.103 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.103 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.103 14:31:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77173 00:14:46.103 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77173 ']' 00:14:46.103 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77173 00:14:46.103 14:31:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:46.103 14:31:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.103 14:31:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77173 00:14:46.103 14:31:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.103 killing process with pid 77173 00:14:46.103 Received shutdown signal, test time was about 18.119456 seconds 00:14:46.103 00:14:46.103 Latency(us) 00:14:46.103 [2024-11-20T14:31:47.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.103 [2024-11-20T14:31:47.160Z] =================================================================================================================== 00:14:46.103 [2024-11-20T14:31:47.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:46.103 14:31:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.103 14:31:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77173' 00:14:46.103 14:31:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77173 00:14:46.103 [2024-11-20 14:31:47.030383] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.103 14:31:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77173 00:14:46.103 [2024-11-20 14:31:47.030589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.103 [2024-11-20 14:31:47.030691] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.103 [2024-11-20 14:31:47.030714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:46.360 [2024-11-20 14:31:47.244307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.734 14:31:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:47.734 00:14:47.734 real 0m21.562s 00:14:47.734 user 0m29.262s 00:14:47.734 sys 0m2.009s 00:14:47.734 14:31:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.734 ************************************ 00:14:47.734 END TEST raid_rebuild_test_sb_io 00:14:47.734 ************************************ 00:14:47.734 14:31:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.734 14:31:48 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:47.734 14:31:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:47.734 14:31:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:47.734 14:31:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.734 14:31:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.734 ************************************ 00:14:47.734 START TEST raid_rebuild_test 00:14:47.734 ************************************ 00:14:47.734 14:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:14:47.734 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:47.734 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:47.734 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:47.734 14:31:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77871 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77871 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77871 ']' 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.735 14:31:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.735 [2024-11-20 14:31:48.595641] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:14:47.735 [2024-11-20 14:31:48.595801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77871 ] 00:14:47.735 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:47.735 Zero copy mechanism will not be used. 00:14:47.735 [2024-11-20 14:31:48.771803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.994 [2024-11-20 14:31:48.920769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.254 [2024-11-20 14:31:49.145880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.254 [2024-11-20 14:31:49.145980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.514 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.514 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:48.514 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.514 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:48.514 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.514 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.772 BaseBdev1_malloc 00:14:48.772 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.772 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:48.772 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.772 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:48.772 [2024-11-20 14:31:49.609017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:48.772 [2024-11-20 14:31:49.609169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.772 [2024-11-20 14:31:49.609208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:48.772 [2024-11-20 14:31:49.609244] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.772 [2024-11-20 14:31:49.612388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.772 [2024-11-20 14:31:49.612439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:48.772 BaseBdev1 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 BaseBdev2_malloc 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 [2024-11-20 14:31:49.665548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:48.773 [2024-11-20 14:31:49.665686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:48.773 [2024-11-20 14:31:49.665727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:48.773 [2024-11-20 14:31:49.665746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.773 [2024-11-20 14:31:49.668805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.773 [2024-11-20 14:31:49.668851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:48.773 BaseBdev2 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 BaseBdev3_malloc 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 [2024-11-20 14:31:49.734802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:48.773 [2024-11-20 14:31:49.734901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.773 [2024-11-20 14:31:49.734935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:48.773 [2024-11-20 14:31:49.734955] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.773 [2024-11-20 14:31:49.737948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.773 [2024-11-20 14:31:49.737992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:48.773 BaseBdev3 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 BaseBdev4_malloc 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.773 [2024-11-20 14:31:49.795792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:48.773 [2024-11-20 14:31:49.795898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.773 [2024-11-20 14:31:49.795930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:48.773 [2024-11-20 14:31:49.795948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.773 [2024-11-20 14:31:49.798951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.773 [2024-11-20 14:31:49.799013] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:48.773 BaseBdev4 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.773 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.031 spare_malloc 00:14:49.031 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.031 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.032 spare_delay 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.032 [2024-11-20 14:31:49.859002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:49.032 [2024-11-20 14:31:49.859097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.032 [2024-11-20 14:31:49.859124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:49.032 [2024-11-20 14:31:49.859143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.032 [2024-11-20 
14:31:49.862139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.032 [2024-11-20 14:31:49.862187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:49.032 spare 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.032 [2024-11-20 14:31:49.871063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.032 [2024-11-20 14:31:49.873663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.032 [2024-11-20 14:31:49.873768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.032 [2024-11-20 14:31:49.873867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.032 [2024-11-20 14:31:49.873993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:49.032 [2024-11-20 14:31:49.874027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:49.032 [2024-11-20 14:31:49.874395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:49.032 [2024-11-20 14:31:49.874605] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:49.032 [2024-11-20 14:31:49.874625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:49.032 [2024-11-20 14:31:49.874863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.032 "name": "raid_bdev1", 00:14:49.032 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:14:49.032 "strip_size_kb": 0, 00:14:49.032 "state": "online", 00:14:49.032 "raid_level": 
"raid1", 00:14:49.032 "superblock": false, 00:14:49.032 "num_base_bdevs": 4, 00:14:49.032 "num_base_bdevs_discovered": 4, 00:14:49.032 "num_base_bdevs_operational": 4, 00:14:49.032 "base_bdevs_list": [ 00:14:49.032 { 00:14:49.032 "name": "BaseBdev1", 00:14:49.032 "uuid": "bbeb78be-2c00-58dc-8829-c6179f3c79a0", 00:14:49.032 "is_configured": true, 00:14:49.032 "data_offset": 0, 00:14:49.032 "data_size": 65536 00:14:49.032 }, 00:14:49.032 { 00:14:49.032 "name": "BaseBdev2", 00:14:49.032 "uuid": "6e91f3ae-9e22-5555-9f60-0b3f87a2a054", 00:14:49.032 "is_configured": true, 00:14:49.032 "data_offset": 0, 00:14:49.032 "data_size": 65536 00:14:49.032 }, 00:14:49.032 { 00:14:49.032 "name": "BaseBdev3", 00:14:49.032 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:14:49.032 "is_configured": true, 00:14:49.032 "data_offset": 0, 00:14:49.032 "data_size": 65536 00:14:49.032 }, 00:14:49.032 { 00:14:49.032 "name": "BaseBdev4", 00:14:49.032 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:14:49.032 "is_configured": true, 00:14:49.032 "data_offset": 0, 00:14:49.032 "data_size": 65536 00:14:49.032 } 00:14:49.032 ] 00:14:49.032 }' 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.032 14:31:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.598 [2024-11-20 14:31:50.475713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.598 14:31:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:49.598 14:31:50 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:49.856 [2024-11-20 14:31:50.859480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:49.856 /dev/nbd0 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.857 1+0 records in 00:14:49.857 1+0 records out 00:14:49.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368752 s, 11.1 MB/s 00:14:49.857 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:50.115 14:31:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:00.085 65536+0 records in 00:15:00.085 65536+0 records out 00:15:00.085 33554432 bytes (34 MB, 32 MiB) copied, 8.55302 s, 3.9 MB/s 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:00.085 [2024-11-20 14:31:59.750253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:00.085 
14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.085 [2024-11-20 14:31:59.782316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.085 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.085 14:31:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.086 "name": "raid_bdev1", 00:15:00.086 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:00.086 "strip_size_kb": 0, 00:15:00.086 "state": "online", 00:15:00.086 "raid_level": "raid1", 00:15:00.086 "superblock": false, 00:15:00.086 "num_base_bdevs": 4, 00:15:00.086 "num_base_bdevs_discovered": 3, 00:15:00.086 "num_base_bdevs_operational": 3, 00:15:00.086 "base_bdevs_list": [ 00:15:00.086 { 00:15:00.086 "name": null, 00:15:00.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.086 "is_configured": false, 00:15:00.086 "data_offset": 0, 00:15:00.086 "data_size": 65536 00:15:00.086 }, 00:15:00.086 { 00:15:00.086 "name": "BaseBdev2", 00:15:00.086 "uuid": "6e91f3ae-9e22-5555-9f60-0b3f87a2a054", 00:15:00.086 "is_configured": true, 00:15:00.086 "data_offset": 0, 00:15:00.086 "data_size": 65536 00:15:00.086 }, 00:15:00.086 { 00:15:00.086 "name": "BaseBdev3", 00:15:00.086 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:00.086 "is_configured": true, 00:15:00.086 "data_offset": 0, 00:15:00.086 "data_size": 65536 00:15:00.086 }, 00:15:00.086 { 00:15:00.086 "name": "BaseBdev4", 00:15:00.086 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:00.086 
"is_configured": true, 00:15:00.086 "data_offset": 0, 00:15:00.086 "data_size": 65536 00:15:00.086 } 00:15:00.086 ] 00:15:00.086 }' 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.086 14:31:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.086 14:32:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.086 14:32:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.086 14:32:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.086 [2024-11-20 14:32:00.290491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.086 [2024-11-20 14:32:00.305158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:00.086 14:32:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.086 14:32:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:00.086 [2024-11-20 14:32:00.307799] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.344 
14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.344 "name": "raid_bdev1", 00:15:00.344 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:00.344 "strip_size_kb": 0, 00:15:00.344 "state": "online", 00:15:00.344 "raid_level": "raid1", 00:15:00.344 "superblock": false, 00:15:00.344 "num_base_bdevs": 4, 00:15:00.344 "num_base_bdevs_discovered": 4, 00:15:00.344 "num_base_bdevs_operational": 4, 00:15:00.344 "process": { 00:15:00.344 "type": "rebuild", 00:15:00.344 "target": "spare", 00:15:00.344 "progress": { 00:15:00.344 "blocks": 20480, 00:15:00.344 "percent": 31 00:15:00.344 } 00:15:00.344 }, 00:15:00.344 "base_bdevs_list": [ 00:15:00.344 { 00:15:00.344 "name": "spare", 00:15:00.344 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:00.344 "is_configured": true, 00:15:00.344 "data_offset": 0, 00:15:00.344 "data_size": 65536 00:15:00.344 }, 00:15:00.344 { 00:15:00.344 "name": "BaseBdev2", 00:15:00.344 "uuid": "6e91f3ae-9e22-5555-9f60-0b3f87a2a054", 00:15:00.344 "is_configured": true, 00:15:00.344 "data_offset": 0, 00:15:00.344 "data_size": 65536 00:15:00.344 }, 00:15:00.344 { 00:15:00.344 "name": "BaseBdev3", 00:15:00.344 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:00.344 "is_configured": true, 00:15:00.344 "data_offset": 0, 00:15:00.344 "data_size": 65536 00:15:00.344 }, 00:15:00.344 { 00:15:00.344 "name": "BaseBdev4", 00:15:00.344 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:00.344 "is_configured": true, 00:15:00.344 "data_offset": 0, 00:15:00.344 "data_size": 65536 00:15:00.344 } 00:15:00.344 ] 00:15:00.344 }' 00:15:00.344 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.602 [2024-11-20 14:32:01.473298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.602 [2024-11-20 14:32:01.518182] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:00.602 [2024-11-20 14:32:01.518329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.602 [2024-11-20 14:32:01.518358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:00.602 [2024-11-20 14:32:01.518376] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.602 14:32:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.602 "name": "raid_bdev1", 00:15:00.602 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:00.602 "strip_size_kb": 0, 00:15:00.602 "state": "online", 00:15:00.602 "raid_level": "raid1", 00:15:00.602 "superblock": false, 00:15:00.602 "num_base_bdevs": 4, 00:15:00.602 "num_base_bdevs_discovered": 3, 00:15:00.602 "num_base_bdevs_operational": 3, 00:15:00.602 "base_bdevs_list": [ 00:15:00.602 { 00:15:00.602 "name": null, 00:15:00.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.602 "is_configured": false, 00:15:00.602 "data_offset": 0, 00:15:00.602 "data_size": 65536 00:15:00.602 }, 00:15:00.602 { 00:15:00.602 "name": "BaseBdev2", 00:15:00.602 "uuid": "6e91f3ae-9e22-5555-9f60-0b3f87a2a054", 00:15:00.602 "is_configured": true, 00:15:00.602 "data_offset": 0, 00:15:00.602 "data_size": 65536 00:15:00.602 }, 00:15:00.602 { 00:15:00.602 "name": 
"BaseBdev3", 00:15:00.602 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:00.602 "is_configured": true, 00:15:00.602 "data_offset": 0, 00:15:00.602 "data_size": 65536 00:15:00.602 }, 00:15:00.602 { 00:15:00.602 "name": "BaseBdev4", 00:15:00.602 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:00.602 "is_configured": true, 00:15:00.602 "data_offset": 0, 00:15:00.602 "data_size": 65536 00:15:00.602 } 00:15:00.602 ] 00:15:00.602 }' 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.602 14:32:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.168 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.168 "name": "raid_bdev1", 00:15:01.168 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:01.168 "strip_size_kb": 0, 00:15:01.168 "state": "online", 00:15:01.168 "raid_level": 
"raid1", 00:15:01.168 "superblock": false, 00:15:01.168 "num_base_bdevs": 4, 00:15:01.168 "num_base_bdevs_discovered": 3, 00:15:01.168 "num_base_bdevs_operational": 3, 00:15:01.168 "base_bdevs_list": [ 00:15:01.168 { 00:15:01.168 "name": null, 00:15:01.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.168 "is_configured": false, 00:15:01.169 "data_offset": 0, 00:15:01.169 "data_size": 65536 00:15:01.169 }, 00:15:01.169 { 00:15:01.169 "name": "BaseBdev2", 00:15:01.169 "uuid": "6e91f3ae-9e22-5555-9f60-0b3f87a2a054", 00:15:01.169 "is_configured": true, 00:15:01.169 "data_offset": 0, 00:15:01.169 "data_size": 65536 00:15:01.169 }, 00:15:01.169 { 00:15:01.169 "name": "BaseBdev3", 00:15:01.169 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:01.169 "is_configured": true, 00:15:01.169 "data_offset": 0, 00:15:01.169 "data_size": 65536 00:15:01.169 }, 00:15:01.169 { 00:15:01.169 "name": "BaseBdev4", 00:15:01.169 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:01.169 "is_configured": true, 00:15:01.169 "data_offset": 0, 00:15:01.169 "data_size": 65536 00:15:01.169 } 00:15:01.169 ] 00:15:01.169 }' 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.169 [2024-11-20 14:32:02.184180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:15:01.169 [2024-11-20 14:32:02.197991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.169 14:32:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:01.169 [2024-11-20 14:32:02.200611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.544 "name": "raid_bdev1", 00:15:02.544 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:02.544 "strip_size_kb": 0, 00:15:02.544 "state": "online", 00:15:02.544 "raid_level": "raid1", 00:15:02.544 "superblock": false, 00:15:02.544 "num_base_bdevs": 4, 00:15:02.544 "num_base_bdevs_discovered": 4, 00:15:02.544 "num_base_bdevs_operational": 4, 
00:15:02.544 "process": { 00:15:02.544 "type": "rebuild", 00:15:02.544 "target": "spare", 00:15:02.544 "progress": { 00:15:02.544 "blocks": 20480, 00:15:02.544 "percent": 31 00:15:02.544 } 00:15:02.544 }, 00:15:02.544 "base_bdevs_list": [ 00:15:02.544 { 00:15:02.544 "name": "spare", 00:15:02.544 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:02.544 "is_configured": true, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 }, 00:15:02.544 { 00:15:02.544 "name": "BaseBdev2", 00:15:02.544 "uuid": "6e91f3ae-9e22-5555-9f60-0b3f87a2a054", 00:15:02.544 "is_configured": true, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 }, 00:15:02.544 { 00:15:02.544 "name": "BaseBdev3", 00:15:02.544 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:02.544 "is_configured": true, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 }, 00:15:02.544 { 00:15:02.544 "name": "BaseBdev4", 00:15:02.544 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:02.544 "is_configured": true, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 } 00:15:02.544 ] 00:15:02.544 }' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.544 [2024-11-20 14:32:03.359024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.544 [2024-11-20 14:32:03.410728] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.544 "name": "raid_bdev1", 00:15:02.544 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:02.544 "strip_size_kb": 0, 00:15:02.544 "state": "online", 00:15:02.544 "raid_level": "raid1", 00:15:02.544 "superblock": false, 00:15:02.544 "num_base_bdevs": 4, 00:15:02.544 "num_base_bdevs_discovered": 3, 00:15:02.544 "num_base_bdevs_operational": 3, 00:15:02.544 "process": { 00:15:02.544 "type": "rebuild", 00:15:02.544 "target": "spare", 00:15:02.544 "progress": { 00:15:02.544 "blocks": 24576, 00:15:02.544 "percent": 37 00:15:02.544 } 00:15:02.544 }, 00:15:02.544 "base_bdevs_list": [ 00:15:02.544 { 00:15:02.544 "name": "spare", 00:15:02.544 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:02.544 "is_configured": true, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 }, 00:15:02.544 { 00:15:02.544 "name": null, 00:15:02.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.544 "is_configured": false, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 }, 00:15:02.544 { 00:15:02.544 "name": "BaseBdev3", 00:15:02.544 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:02.544 "is_configured": true, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 }, 00:15:02.544 { 00:15:02.544 "name": "BaseBdev4", 00:15:02.544 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:02.544 "is_configured": true, 00:15:02.544 "data_offset": 0, 00:15:02.544 "data_size": 65536 00:15:02.544 } 00:15:02.544 ] 00:15:02.544 }' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.544 14:32:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=485 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.544 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.545 14:32:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.803 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.803 "name": "raid_bdev1", 00:15:02.803 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:02.803 "strip_size_kb": 0, 00:15:02.803 "state": "online", 00:15:02.803 "raid_level": "raid1", 00:15:02.803 "superblock": false, 00:15:02.803 "num_base_bdevs": 4, 00:15:02.803 "num_base_bdevs_discovered": 3, 00:15:02.803 "num_base_bdevs_operational": 3, 00:15:02.803 "process": { 00:15:02.803 "type": "rebuild", 00:15:02.803 "target": "spare", 00:15:02.803 "progress": { 00:15:02.803 "blocks": 26624, 00:15:02.803 "percent": 40 
00:15:02.803 } 00:15:02.803 }, 00:15:02.803 "base_bdevs_list": [ 00:15:02.803 { 00:15:02.803 "name": "spare", 00:15:02.803 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:02.803 "is_configured": true, 00:15:02.803 "data_offset": 0, 00:15:02.803 "data_size": 65536 00:15:02.803 }, 00:15:02.803 { 00:15:02.803 "name": null, 00:15:02.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.803 "is_configured": false, 00:15:02.803 "data_offset": 0, 00:15:02.803 "data_size": 65536 00:15:02.803 }, 00:15:02.803 { 00:15:02.803 "name": "BaseBdev3", 00:15:02.803 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:02.803 "is_configured": true, 00:15:02.803 "data_offset": 0, 00:15:02.803 "data_size": 65536 00:15:02.803 }, 00:15:02.803 { 00:15:02.803 "name": "BaseBdev4", 00:15:02.803 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:02.803 "is_configured": true, 00:15:02.803 "data_offset": 0, 00:15:02.803 "data_size": 65536 00:15:02.803 } 00:15:02.803 ] 00:15:02.803 }' 00:15:02.803 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.803 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.803 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.803 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.803 14:32:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.738 14:32:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.738 "name": "raid_bdev1", 00:15:03.738 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:03.738 "strip_size_kb": 0, 00:15:03.738 "state": "online", 00:15:03.738 "raid_level": "raid1", 00:15:03.738 "superblock": false, 00:15:03.738 "num_base_bdevs": 4, 00:15:03.738 "num_base_bdevs_discovered": 3, 00:15:03.738 "num_base_bdevs_operational": 3, 00:15:03.738 "process": { 00:15:03.738 "type": "rebuild", 00:15:03.738 "target": "spare", 00:15:03.738 "progress": { 00:15:03.738 "blocks": 51200, 00:15:03.738 "percent": 78 00:15:03.738 } 00:15:03.738 }, 00:15:03.738 "base_bdevs_list": [ 00:15:03.738 { 00:15:03.738 "name": "spare", 00:15:03.738 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:03.738 "is_configured": true, 00:15:03.738 "data_offset": 0, 00:15:03.738 "data_size": 65536 00:15:03.738 }, 00:15:03.738 { 00:15:03.738 "name": null, 00:15:03.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.738 "is_configured": false, 00:15:03.738 "data_offset": 0, 00:15:03.738 "data_size": 65536 00:15:03.738 }, 00:15:03.738 { 00:15:03.738 "name": "BaseBdev3", 00:15:03.738 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:03.738 "is_configured": true, 
00:15:03.738 "data_offset": 0, 00:15:03.738 "data_size": 65536 00:15:03.738 }, 00:15:03.738 { 00:15:03.738 "name": "BaseBdev4", 00:15:03.738 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:03.738 "is_configured": true, 00:15:03.738 "data_offset": 0, 00:15:03.738 "data_size": 65536 00:15:03.738 } 00:15:03.738 ] 00:15:03.738 }' 00:15:03.738 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.996 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.996 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.996 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.996 14:32:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.563 [2024-11-20 14:32:05.427150] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:04.563 [2024-11-20 14:32:05.427269] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:04.563 [2024-11-20 14:32:05.427346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.131 "name": "raid_bdev1", 00:15:05.131 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:05.131 "strip_size_kb": 0, 00:15:05.131 "state": "online", 00:15:05.131 "raid_level": "raid1", 00:15:05.131 "superblock": false, 00:15:05.131 "num_base_bdevs": 4, 00:15:05.131 "num_base_bdevs_discovered": 3, 00:15:05.131 "num_base_bdevs_operational": 3, 00:15:05.131 "base_bdevs_list": [ 00:15:05.131 { 00:15:05.131 "name": "spare", 00:15:05.131 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:05.131 "is_configured": true, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 }, 00:15:05.131 { 00:15:05.131 "name": null, 00:15:05.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.131 "is_configured": false, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 }, 00:15:05.131 { 00:15:05.131 "name": "BaseBdev3", 00:15:05.131 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:05.131 "is_configured": true, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 }, 00:15:05.131 { 00:15:05.131 "name": "BaseBdev4", 00:15:05.131 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:05.131 "is_configured": true, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 } 00:15:05.131 ] 00:15:05.131 }' 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.131 14:32:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:05.131 14:32:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.131 "name": "raid_bdev1", 00:15:05.131 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:05.131 "strip_size_kb": 0, 00:15:05.131 "state": "online", 00:15:05.131 "raid_level": "raid1", 00:15:05.131 "superblock": false, 00:15:05.131 "num_base_bdevs": 4, 00:15:05.131 "num_base_bdevs_discovered": 3, 00:15:05.131 "num_base_bdevs_operational": 3, 00:15:05.131 "base_bdevs_list": [ 00:15:05.131 { 00:15:05.131 "name": "spare", 
00:15:05.131 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:05.131 "is_configured": true, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 }, 00:15:05.131 { 00:15:05.131 "name": null, 00:15:05.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.131 "is_configured": false, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 }, 00:15:05.131 { 00:15:05.131 "name": "BaseBdev3", 00:15:05.131 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:05.131 "is_configured": true, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 }, 00:15:05.131 { 00:15:05.131 "name": "BaseBdev4", 00:15:05.131 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:05.131 "is_configured": true, 00:15:05.131 "data_offset": 0, 00:15:05.131 "data_size": 65536 00:15:05.131 } 00:15:05.131 ] 00:15:05.131 }' 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.131 14:32:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.131 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.390 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.390 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.390 "name": "raid_bdev1", 00:15:05.390 "uuid": "2b27b491-d7e3-47e8-bd68-d092c8c48b43", 00:15:05.390 "strip_size_kb": 0, 00:15:05.390 "state": "online", 00:15:05.390 "raid_level": "raid1", 00:15:05.390 "superblock": false, 00:15:05.390 "num_base_bdevs": 4, 00:15:05.390 "num_base_bdevs_discovered": 3, 00:15:05.390 "num_base_bdevs_operational": 3, 00:15:05.390 "base_bdevs_list": [ 00:15:05.390 { 00:15:05.390 "name": "spare", 00:15:05.390 "uuid": "e9fa5876-31fa-51d0-be0f-f41a347e5da0", 00:15:05.390 "is_configured": true, 00:15:05.390 "data_offset": 0, 00:15:05.390 "data_size": 65536 00:15:05.390 }, 00:15:05.390 { 00:15:05.390 "name": null, 00:15:05.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.390 "is_configured": false, 00:15:05.390 "data_offset": 0, 00:15:05.390 "data_size": 65536 00:15:05.390 }, 00:15:05.390 { 00:15:05.390 "name": "BaseBdev3", 00:15:05.390 "uuid": "7f9f13bf-d95c-5ec0-817b-2c83a37f68fc", 00:15:05.390 "is_configured": true, 
00:15:05.390 "data_offset": 0, 00:15:05.390 "data_size": 65536 00:15:05.390 }, 00:15:05.390 { 00:15:05.390 "name": "BaseBdev4", 00:15:05.390 "uuid": "18f727b9-36a1-5b95-bc8b-c6b5a6e512db", 00:15:05.390 "is_configured": true, 00:15:05.390 "data_offset": 0, 00:15:05.390 "data_size": 65536 00:15:05.390 } 00:15:05.390 ] 00:15:05.390 }' 00:15:05.390 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.390 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.649 [2024-11-20 14:32:06.679301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.649 [2024-11-20 14:32:06.679366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.649 [2024-11-20 14:32:06.679479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.649 [2024-11-20 14:32:06.679593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.649 [2024-11-20 14:32:06.679611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:05.649 14:32:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.911 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:05.912 14:32:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:06.173 /dev/nbd0 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:06.173 14:32:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.173 1+0 records in 00:15:06.173 1+0 records out 00:15:06.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355919 s, 11.5 MB/s 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:06.173 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:06.431 /dev/nbd1 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:06.431 
14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:06.431 1+0 records in 00:15:06.431 1+0 records out 00:15:06.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409446 s, 10.0 MB/s 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:15:06.431 14:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:06.689 14:32:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:06.689 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:06.689 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:06.689 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.689 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:06.689 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.689 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.947 14:32:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:07.205 
14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77871 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77871 ']' 00:15:07.205 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77871 00:15:07.206 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:07.206 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.206 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77871 00:15:07.464 killing process with pid 77871 00:15:07.464 Received shutdown signal, test time was about 60.000000 seconds 00:15:07.464 00:15:07.464 Latency(us) 00:15:07.464 [2024-11-20T14:32:08.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.464 [2024-11-20T14:32:08.521Z] =================================================================================================================== 00:15:07.464 [2024-11-20T14:32:08.521Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:15:07.464 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.464 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.464 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77871' 00:15:07.464 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77871 00:15:07.464 14:32:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77871 00:15:07.464 [2024-11-20 14:32:08.274173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.722 [2024-11-20 14:32:08.721620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.099 00:15:09.099 real 0m21.310s 00:15:09.099 user 0m23.438s 00:15:09.099 sys 0m3.672s 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.099 ************************************ 00:15:09.099 END TEST raid_rebuild_test 00:15:09.099 ************************************ 00:15:09.099 14:32:09 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:09.099 14:32:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.099 14:32:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.099 14:32:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.099 ************************************ 00:15:09.099 START TEST raid_rebuild_test_sb 00:15:09.099 ************************************ 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:09.099 14:32:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.099 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78359 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78359 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78359 ']' 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:09.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.100 14:32:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.100 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.100 Zero copy mechanism will not be used. 00:15:09.100 [2024-11-20 14:32:09.956337] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:15:09.100 [2024-11-20 14:32:09.956496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78359 ] 00:15:09.100 [2024-11-20 14:32:10.129648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.358 [2024-11-20 14:32:10.261161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.616 [2024-11-20 14:32:10.466243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.616 [2024-11-20 14:32:10.466521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.182 
BaseBdev1_malloc 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.182 [2024-11-20 14:32:10.985413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.182 [2024-11-20 14:32:10.985493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.182 [2024-11-20 14:32:10.985527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.182 [2024-11-20 14:32:10.985547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.182 [2024-11-20 14:32:10.988335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.182 [2024-11-20 14:32:10.988530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.182 BaseBdev1 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.182 14:32:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.182 BaseBdev2_malloc 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.182 [2024-11-20 14:32:11.038682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.182 [2024-11-20 14:32:11.038770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.182 [2024-11-20 14:32:11.038805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.182 [2024-11-20 14:32:11.038824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.182 [2024-11-20 14:32:11.041581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.182 [2024-11-20 14:32:11.041646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.182 BaseBdev2 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.182 BaseBdev3_malloc 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.182 [2024-11-20 14:32:11.110218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:10.182 [2024-11-20 14:32:11.110294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.182 [2024-11-20 14:32:11.110329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.182 [2024-11-20 14:32:11.110349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.182 [2024-11-20 14:32:11.113127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.182 [2024-11-20 14:32:11.113178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:10.182 BaseBdev3 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.182 BaseBdev4_malloc 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.182 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.183 [2024-11-20 14:32:11.166912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:15:10.183 [2024-11-20 14:32:11.166993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.183 [2024-11-20 14:32:11.167025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:10.183 [2024-11-20 14:32:11.167056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.183 [2024-11-20 14:32:11.169858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.183 [2024-11-20 14:32:11.169911] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:10.183 BaseBdev4 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.183 spare_malloc 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.183 spare_delay 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:10.183 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.183 14:32:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.183 [2024-11-20 14:32:11.232842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:10.183 [2024-11-20 14:32:11.232915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.183 [2024-11-20 14:32:11.232943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:10.183 [2024-11-20 14:32:11.232961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.183 [2024-11-20 14:32:11.235897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.183 [2024-11-20 14:32:11.236072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:10.441 spare 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.441 [2024-11-20 14:32:11.244982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.441 [2024-11-20 14:32:11.247683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.441 [2024-11-20 14:32:11.247898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.441 [2024-11-20 14:32:11.248029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.441 [2024-11-20 14:32:11.248398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.441 [2024-11-20 14:32:11.248475] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:10.441 [2024-11-20 14:32:11.248903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:10.441 [2024-11-20 14:32:11.249263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.441 [2024-11-20 14:32:11.249385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.441 [2024-11-20 14:32:11.249827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.441 "name": "raid_bdev1", 00:15:10.441 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:10.441 "strip_size_kb": 0, 00:15:10.441 "state": "online", 00:15:10.441 "raid_level": "raid1", 00:15:10.441 "superblock": true, 00:15:10.441 "num_base_bdevs": 4, 00:15:10.441 "num_base_bdevs_discovered": 4, 00:15:10.441 "num_base_bdevs_operational": 4, 00:15:10.441 "base_bdevs_list": [ 00:15:10.441 { 00:15:10.441 "name": "BaseBdev1", 00:15:10.441 "uuid": "80595edc-768e-509b-96c9-a584a9fd141e", 00:15:10.441 "is_configured": true, 00:15:10.441 "data_offset": 2048, 00:15:10.441 "data_size": 63488 00:15:10.441 }, 00:15:10.441 { 00:15:10.441 "name": "BaseBdev2", 00:15:10.441 "uuid": "01ce5e5c-56e7-5822-87d7-a2e260a320bd", 00:15:10.441 "is_configured": true, 00:15:10.441 "data_offset": 2048, 00:15:10.441 "data_size": 63488 00:15:10.441 }, 00:15:10.441 { 00:15:10.441 "name": "BaseBdev3", 00:15:10.441 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:10.441 "is_configured": true, 00:15:10.441 "data_offset": 2048, 00:15:10.441 "data_size": 63488 00:15:10.441 }, 00:15:10.441 { 00:15:10.441 "name": "BaseBdev4", 00:15:10.441 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:10.441 "is_configured": true, 00:15:10.441 "data_offset": 2048, 00:15:10.441 "data_size": 63488 00:15:10.441 } 00:15:10.441 ] 00:15:10.441 }' 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.441 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.008 [2024-11-20 14:32:11.778373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.008 14:32:11 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.008 14:32:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:11.266 [2024-11-20 14:32:12.218129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:11.266 /dev/nbd0 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.266 1+0 records in 00:15:11.266 1+0 records out 00:15:11.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422588 s, 9.7 MB/s 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:11.266 14:32:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:21.236 63488+0 records in 00:15:21.236 63488+0 records out 00:15:21.236 32505856 bytes (33 MB, 31 MiB) copied, 8.473 s, 3.8 MB/s 00:15:21.236 14:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:21.236 14:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:21.236 14:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:15:21.236 14:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.236 14:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:21.236 14:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.236 14:32:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:21.236 [2024-11-20 14:32:21.031934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.236 [2024-11-20 14:32:21.064005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.236 "name": "raid_bdev1", 00:15:21.236 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:21.236 "strip_size_kb": 0, 00:15:21.236 "state": "online", 00:15:21.236 "raid_level": "raid1", 00:15:21.236 "superblock": true, 00:15:21.236 "num_base_bdevs": 4, 00:15:21.236 
"num_base_bdevs_discovered": 3, 00:15:21.236 "num_base_bdevs_operational": 3, 00:15:21.236 "base_bdevs_list": [ 00:15:21.236 { 00:15:21.236 "name": null, 00:15:21.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.236 "is_configured": false, 00:15:21.236 "data_offset": 0, 00:15:21.236 "data_size": 63488 00:15:21.236 }, 00:15:21.236 { 00:15:21.236 "name": "BaseBdev2", 00:15:21.236 "uuid": "01ce5e5c-56e7-5822-87d7-a2e260a320bd", 00:15:21.236 "is_configured": true, 00:15:21.236 "data_offset": 2048, 00:15:21.236 "data_size": 63488 00:15:21.236 }, 00:15:21.236 { 00:15:21.236 "name": "BaseBdev3", 00:15:21.236 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:21.236 "is_configured": true, 00:15:21.236 "data_offset": 2048, 00:15:21.236 "data_size": 63488 00:15:21.236 }, 00:15:21.236 { 00:15:21.236 "name": "BaseBdev4", 00:15:21.236 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:21.236 "is_configured": true, 00:15:21.236 "data_offset": 2048, 00:15:21.236 "data_size": 63488 00:15:21.236 } 00:15:21.236 ] 00:15:21.236 }' 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.236 [2024-11-20 14:32:21.572271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.236 [2024-11-20 14:32:21.587242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.236 14:32:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:15:21.236 [2024-11-20 14:32:21.590246] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.803 "name": "raid_bdev1", 00:15:21.803 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:21.803 "strip_size_kb": 0, 00:15:21.803 "state": "online", 00:15:21.803 "raid_level": "raid1", 00:15:21.803 "superblock": true, 00:15:21.803 "num_base_bdevs": 4, 00:15:21.803 "num_base_bdevs_discovered": 4, 00:15:21.803 "num_base_bdevs_operational": 4, 00:15:21.803 "process": { 00:15:21.803 "type": "rebuild", 00:15:21.803 "target": "spare", 00:15:21.803 "progress": { 00:15:21.803 "blocks": 20480, 00:15:21.803 "percent": 32 00:15:21.803 } 00:15:21.803 }, 00:15:21.803 "base_bdevs_list": [ 00:15:21.803 { 
00:15:21.803 "name": "spare", 00:15:21.803 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:21.803 "is_configured": true, 00:15:21.803 "data_offset": 2048, 00:15:21.803 "data_size": 63488 00:15:21.803 }, 00:15:21.803 { 00:15:21.803 "name": "BaseBdev2", 00:15:21.803 "uuid": "01ce5e5c-56e7-5822-87d7-a2e260a320bd", 00:15:21.803 "is_configured": true, 00:15:21.803 "data_offset": 2048, 00:15:21.803 "data_size": 63488 00:15:21.803 }, 00:15:21.803 { 00:15:21.803 "name": "BaseBdev3", 00:15:21.803 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:21.803 "is_configured": true, 00:15:21.803 "data_offset": 2048, 00:15:21.803 "data_size": 63488 00:15:21.803 }, 00:15:21.803 { 00:15:21.803 "name": "BaseBdev4", 00:15:21.803 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:21.803 "is_configured": true, 00:15:21.803 "data_offset": 2048, 00:15:21.803 "data_size": 63488 00:15:21.803 } 00:15:21.803 ] 00:15:21.803 }' 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.803 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.804 [2024-11-20 14:32:22.759981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.804 [2024-11-20 14:32:22.801560] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:21.804 [2024-11-20 
14:32:22.802067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.804 [2024-11-20 14:32:22.802234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.804 [2024-11-20 14:32:22.802294] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:21.804 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.062 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.062 "name": "raid_bdev1", 00:15:22.062 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:22.062 "strip_size_kb": 0, 00:15:22.062 "state": "online", 00:15:22.062 "raid_level": "raid1", 00:15:22.062 "superblock": true, 00:15:22.062 "num_base_bdevs": 4, 00:15:22.062 "num_base_bdevs_discovered": 3, 00:15:22.062 "num_base_bdevs_operational": 3, 00:15:22.062 "base_bdevs_list": [ 00:15:22.062 { 00:15:22.062 "name": null, 00:15:22.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.062 "is_configured": false, 00:15:22.062 "data_offset": 0, 00:15:22.062 "data_size": 63488 00:15:22.062 }, 00:15:22.062 { 00:15:22.062 "name": "BaseBdev2", 00:15:22.062 "uuid": "01ce5e5c-56e7-5822-87d7-a2e260a320bd", 00:15:22.062 "is_configured": true, 00:15:22.062 "data_offset": 2048, 00:15:22.062 "data_size": 63488 00:15:22.062 }, 00:15:22.062 { 00:15:22.062 "name": "BaseBdev3", 00:15:22.062 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:22.062 "is_configured": true, 00:15:22.062 "data_offset": 2048, 00:15:22.062 "data_size": 63488 00:15:22.062 }, 00:15:22.062 { 00:15:22.062 "name": "BaseBdev4", 00:15:22.062 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:22.062 "is_configured": true, 00:15:22.062 "data_offset": 2048, 00:15:22.062 "data_size": 63488 00:15:22.062 } 00:15:22.062 ] 00:15:22.062 }' 00:15:22.062 14:32:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.062 14:32:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.319 14:32:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.319 "name": "raid_bdev1", 00:15:22.319 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:22.319 "strip_size_kb": 0, 00:15:22.319 "state": "online", 00:15:22.319 "raid_level": "raid1", 00:15:22.319 "superblock": true, 00:15:22.319 "num_base_bdevs": 4, 00:15:22.319 "num_base_bdevs_discovered": 3, 00:15:22.319 "num_base_bdevs_operational": 3, 00:15:22.319 "base_bdevs_list": [ 00:15:22.319 { 00:15:22.319 "name": null, 00:15:22.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.319 "is_configured": false, 00:15:22.319 "data_offset": 0, 00:15:22.319 "data_size": 63488 00:15:22.319 }, 00:15:22.319 { 00:15:22.319 "name": "BaseBdev2", 00:15:22.319 "uuid": "01ce5e5c-56e7-5822-87d7-a2e260a320bd", 00:15:22.319 "is_configured": true, 00:15:22.319 "data_offset": 2048, 00:15:22.319 "data_size": 63488 00:15:22.319 }, 00:15:22.319 { 00:15:22.319 "name": "BaseBdev3", 00:15:22.319 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:22.319 "is_configured": true, 00:15:22.319 "data_offset": 2048, 00:15:22.319 "data_size": 63488 
00:15:22.319 }, 00:15:22.319 { 00:15:22.319 "name": "BaseBdev4", 00:15:22.319 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:22.319 "is_configured": true, 00:15:22.319 "data_offset": 2048, 00:15:22.319 "data_size": 63488 00:15:22.319 } 00:15:22.319 ] 00:15:22.319 }' 00:15:22.319 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.577 [2024-11-20 14:32:23.463823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.577 [2024-11-20 14:32:23.477639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.577 14:32:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:22.577 [2024-11-20 14:32:23.480336] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.537 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.537 "name": "raid_bdev1", 00:15:23.537 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:23.537 "strip_size_kb": 0, 00:15:23.537 "state": "online", 00:15:23.537 "raid_level": "raid1", 00:15:23.537 "superblock": true, 00:15:23.537 "num_base_bdevs": 4, 00:15:23.537 "num_base_bdevs_discovered": 4, 00:15:23.537 "num_base_bdevs_operational": 4, 00:15:23.537 "process": { 00:15:23.537 "type": "rebuild", 00:15:23.537 "target": "spare", 00:15:23.537 "progress": { 00:15:23.537 "blocks": 20480, 00:15:23.537 "percent": 32 00:15:23.537 } 00:15:23.537 }, 00:15:23.537 "base_bdevs_list": [ 00:15:23.537 { 00:15:23.537 "name": "spare", 00:15:23.537 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:23.537 "is_configured": true, 00:15:23.537 "data_offset": 2048, 00:15:23.537 "data_size": 63488 00:15:23.537 }, 00:15:23.537 { 00:15:23.537 "name": "BaseBdev2", 00:15:23.537 "uuid": "01ce5e5c-56e7-5822-87d7-a2e260a320bd", 00:15:23.537 "is_configured": true, 00:15:23.537 "data_offset": 2048, 00:15:23.537 "data_size": 63488 00:15:23.537 }, 00:15:23.537 { 00:15:23.537 "name": "BaseBdev3", 00:15:23.537 "uuid": 
"edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:23.537 "is_configured": true, 00:15:23.537 "data_offset": 2048, 00:15:23.537 "data_size": 63488 00:15:23.538 }, 00:15:23.538 { 00:15:23.538 "name": "BaseBdev4", 00:15:23.538 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:23.538 "is_configured": true, 00:15:23.538 "data_offset": 2048, 00:15:23.538 "data_size": 63488 00:15:23.538 } 00:15:23.538 ] 00:15:23.538 }' 00:15:23.538 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.538 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:23.796 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.796 [2024-11-20 14:32:24.649459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:23.796 [2024-11-20 14:32:24.791066] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.796 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.055 "name": "raid_bdev1", 00:15:24.055 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:24.055 "strip_size_kb": 0, 00:15:24.055 "state": "online", 00:15:24.055 "raid_level": "raid1", 00:15:24.055 "superblock": true, 00:15:24.055 "num_base_bdevs": 4, 00:15:24.055 "num_base_bdevs_discovered": 3, 00:15:24.055 "num_base_bdevs_operational": 3, 00:15:24.055 
"process": { 00:15:24.055 "type": "rebuild", 00:15:24.055 "target": "spare", 00:15:24.055 "progress": { 00:15:24.055 "blocks": 24576, 00:15:24.055 "percent": 38 00:15:24.055 } 00:15:24.055 }, 00:15:24.055 "base_bdevs_list": [ 00:15:24.055 { 00:15:24.055 "name": "spare", 00:15:24.055 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:24.055 "is_configured": true, 00:15:24.055 "data_offset": 2048, 00:15:24.055 "data_size": 63488 00:15:24.055 }, 00:15:24.055 { 00:15:24.055 "name": null, 00:15:24.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.055 "is_configured": false, 00:15:24.055 "data_offset": 0, 00:15:24.055 "data_size": 63488 00:15:24.055 }, 00:15:24.055 { 00:15:24.055 "name": "BaseBdev3", 00:15:24.055 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:24.055 "is_configured": true, 00:15:24.055 "data_offset": 2048, 00:15:24.055 "data_size": 63488 00:15:24.055 }, 00:15:24.055 { 00:15:24.055 "name": "BaseBdev4", 00:15:24.055 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:24.055 "is_configured": true, 00:15:24.055 "data_offset": 2048, 00:15:24.055 "data_size": 63488 00:15:24.055 } 00:15:24.055 ] 00:15:24.055 }' 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=506 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.055 14:32:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.055 "name": "raid_bdev1", 00:15:24.055 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:24.055 "strip_size_kb": 0, 00:15:24.055 "state": "online", 00:15:24.055 "raid_level": "raid1", 00:15:24.055 "superblock": true, 00:15:24.055 "num_base_bdevs": 4, 00:15:24.055 "num_base_bdevs_discovered": 3, 00:15:24.055 "num_base_bdevs_operational": 3, 00:15:24.055 "process": { 00:15:24.055 "type": "rebuild", 00:15:24.055 "target": "spare", 00:15:24.055 "progress": { 00:15:24.055 "blocks": 26624, 00:15:24.055 "percent": 41 00:15:24.055 } 00:15:24.055 }, 00:15:24.055 "base_bdevs_list": [ 00:15:24.055 { 00:15:24.055 "name": "spare", 00:15:24.055 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:24.055 "is_configured": true, 00:15:24.055 "data_offset": 2048, 00:15:24.055 "data_size": 63488 00:15:24.055 }, 00:15:24.055 { 00:15:24.055 "name": null, 00:15:24.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.055 
"is_configured": false, 00:15:24.055 "data_offset": 0, 00:15:24.055 "data_size": 63488 00:15:24.055 }, 00:15:24.055 { 00:15:24.055 "name": "BaseBdev3", 00:15:24.055 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:24.055 "is_configured": true, 00:15:24.055 "data_offset": 2048, 00:15:24.055 "data_size": 63488 00:15:24.055 }, 00:15:24.055 { 00:15:24.055 "name": "BaseBdev4", 00:15:24.055 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:24.055 "is_configured": true, 00:15:24.055 "data_offset": 2048, 00:15:24.055 "data_size": 63488 00:15:24.055 } 00:15:24.055 ] 00:15:24.055 }' 00:15:24.055 14:32:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.055 14:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.055 14:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.055 14:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.055 14:32:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.429 "name": "raid_bdev1", 00:15:25.429 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:25.429 "strip_size_kb": 0, 00:15:25.429 "state": "online", 00:15:25.429 "raid_level": "raid1", 00:15:25.429 "superblock": true, 00:15:25.429 "num_base_bdevs": 4, 00:15:25.429 "num_base_bdevs_discovered": 3, 00:15:25.429 "num_base_bdevs_operational": 3, 00:15:25.429 "process": { 00:15:25.429 "type": "rebuild", 00:15:25.429 "target": "spare", 00:15:25.429 "progress": { 00:15:25.429 "blocks": 49152, 00:15:25.429 "percent": 77 00:15:25.429 } 00:15:25.429 }, 00:15:25.429 "base_bdevs_list": [ 00:15:25.429 { 00:15:25.429 "name": "spare", 00:15:25.429 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:25.429 "is_configured": true, 00:15:25.429 "data_offset": 2048, 00:15:25.429 "data_size": 63488 00:15:25.429 }, 00:15:25.429 { 00:15:25.429 "name": null, 00:15:25.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.429 "is_configured": false, 00:15:25.429 "data_offset": 0, 00:15:25.429 "data_size": 63488 00:15:25.429 }, 00:15:25.429 { 00:15:25.429 "name": "BaseBdev3", 00:15:25.429 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:25.429 "is_configured": true, 00:15:25.429 "data_offset": 2048, 00:15:25.429 "data_size": 63488 00:15:25.429 }, 00:15:25.429 { 00:15:25.429 "name": "BaseBdev4", 00:15:25.429 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:25.429 "is_configured": true, 00:15:25.429 "data_offset": 2048, 00:15:25.429 "data_size": 63488 00:15:25.429 } 00:15:25.429 ] 00:15:25.429 }' 00:15:25.429 
14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.429 14:32:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.778 [2024-11-20 14:32:26.706901] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:25.778 [2024-11-20 14:32:26.707026] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:25.778 [2024-11-20 14:32:26.707217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.345 "name": "raid_bdev1", 00:15:26.345 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:26.345 "strip_size_kb": 0, 00:15:26.345 "state": "online", 00:15:26.345 "raid_level": "raid1", 00:15:26.345 "superblock": true, 00:15:26.345 "num_base_bdevs": 4, 00:15:26.345 "num_base_bdevs_discovered": 3, 00:15:26.345 "num_base_bdevs_operational": 3, 00:15:26.345 "base_bdevs_list": [ 00:15:26.345 { 00:15:26.345 "name": "spare", 00:15:26.345 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:26.345 "is_configured": true, 00:15:26.345 "data_offset": 2048, 00:15:26.345 "data_size": 63488 00:15:26.345 }, 00:15:26.345 { 00:15:26.345 "name": null, 00:15:26.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.345 "is_configured": false, 00:15:26.345 "data_offset": 0, 00:15:26.345 "data_size": 63488 00:15:26.345 }, 00:15:26.345 { 00:15:26.345 "name": "BaseBdev3", 00:15:26.345 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:26.345 "is_configured": true, 00:15:26.345 "data_offset": 2048, 00:15:26.345 "data_size": 63488 00:15:26.345 }, 00:15:26.345 { 00:15:26.345 "name": "BaseBdev4", 00:15:26.345 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:26.345 "is_configured": true, 00:15:26.345 "data_offset": 2048, 00:15:26.345 "data_size": 63488 00:15:26.345 } 00:15:26.345 ] 00:15:26.345 }' 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none 
== \s\p\a\r\e ]] 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.345 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.603 "name": "raid_bdev1", 00:15:26.603 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:26.603 "strip_size_kb": 0, 00:15:26.603 "state": "online", 00:15:26.603 "raid_level": "raid1", 00:15:26.603 "superblock": true, 00:15:26.603 "num_base_bdevs": 4, 00:15:26.603 "num_base_bdevs_discovered": 3, 00:15:26.603 "num_base_bdevs_operational": 3, 00:15:26.603 "base_bdevs_list": [ 00:15:26.603 { 00:15:26.603 "name": "spare", 00:15:26.603 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:26.603 "is_configured": true, 00:15:26.603 "data_offset": 2048, 00:15:26.603 "data_size": 63488 00:15:26.603 }, 00:15:26.603 { 00:15:26.603 "name": null, 00:15:26.603 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:26.603 "is_configured": false, 00:15:26.603 "data_offset": 0, 00:15:26.603 "data_size": 63488 00:15:26.603 }, 00:15:26.603 { 00:15:26.603 "name": "BaseBdev3", 00:15:26.603 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:26.603 "is_configured": true, 00:15:26.603 "data_offset": 2048, 00:15:26.603 "data_size": 63488 00:15:26.603 }, 00:15:26.603 { 00:15:26.603 "name": "BaseBdev4", 00:15:26.603 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:26.603 "is_configured": true, 00:15:26.603 "data_offset": 2048, 00:15:26.603 "data_size": 63488 00:15:26.603 } 00:15:26.603 ] 00:15:26.603 }' 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.603 
14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.603 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.603 "name": "raid_bdev1", 00:15:26.603 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:26.603 "strip_size_kb": 0, 00:15:26.603 "state": "online", 00:15:26.603 "raid_level": "raid1", 00:15:26.604 "superblock": true, 00:15:26.604 "num_base_bdevs": 4, 00:15:26.604 "num_base_bdevs_discovered": 3, 00:15:26.604 "num_base_bdevs_operational": 3, 00:15:26.604 "base_bdevs_list": [ 00:15:26.604 { 00:15:26.604 "name": "spare", 00:15:26.604 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:26.604 "is_configured": true, 00:15:26.604 "data_offset": 2048, 00:15:26.604 "data_size": 63488 00:15:26.604 }, 00:15:26.604 { 00:15:26.604 "name": null, 00:15:26.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.604 "is_configured": false, 00:15:26.604 "data_offset": 0, 00:15:26.604 "data_size": 63488 00:15:26.604 }, 00:15:26.604 { 00:15:26.604 "name": "BaseBdev3", 00:15:26.604 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:26.604 "is_configured": true, 00:15:26.604 "data_offset": 2048, 00:15:26.604 "data_size": 63488 00:15:26.604 }, 00:15:26.604 { 00:15:26.604 "name": "BaseBdev4", 00:15:26.604 "uuid": 
"ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:26.604 "is_configured": true, 00:15:26.604 "data_offset": 2048, 00:15:26.604 "data_size": 63488 00:15:26.604 } 00:15:26.604 ] 00:15:26.604 }' 00:15:26.604 14:32:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.604 14:32:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 [2024-11-20 14:32:28.044445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.168 [2024-11-20 14:32:28.044651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.168 [2024-11-20 14:32:28.044901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.168 [2024-11-20 14:32:28.045152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.168 [2024-11-20 14:32:28.045319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.168 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:27.425 /dev/nbd0 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.425 1+0 records in 00:15:27.425 1+0 records out 00:15:27.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646589 s, 6.3 MB/s 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.425 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:27.684 /dev/nbd1 00:15:27.684 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:27.684 14:32:28 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:27.684 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:27.684 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:27.684 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.684 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.684 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.942 1+0 records in 00:15:27.942 1+0 records out 00:15:27.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339736 s, 12.1 MB/s 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.942 14:32:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.507 [2024-11-20 14:32:29.549398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:28.507 [2024-11-20 14:32:29.549465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:15:28.507 [2024-11-20 14:32:29.549501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:28.507 [2024-11-20 14:32:29.549517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.507 [2024-11-20 14:32:29.552614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.507 [2024-11-20 14:32:29.552702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:28.507 [2024-11-20 14:32:29.552825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:28.507 [2024-11-20 14:32:29.552894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.507 [2024-11-20 14:32:29.553077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.507 [2024-11-20 14:32:29.553218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.507 spare 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.507 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.764 [2024-11-20 14:32:29.653350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:28.764 [2024-11-20 14:32:29.653385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:28.764 [2024-11-20 14:32:29.653795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:28.764 [2024-11-20 14:32:29.654040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:28.764 [2024-11-20 14:32:29.654062] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:28.764 [2024-11-20 14:32:29.654314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.764 
14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.764 "name": "raid_bdev1", 00:15:28.764 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:28.764 "strip_size_kb": 0, 00:15:28.764 "state": "online", 00:15:28.764 "raid_level": "raid1", 00:15:28.764 "superblock": true, 00:15:28.764 "num_base_bdevs": 4, 00:15:28.764 "num_base_bdevs_discovered": 3, 00:15:28.764 "num_base_bdevs_operational": 3, 00:15:28.764 "base_bdevs_list": [ 00:15:28.764 { 00:15:28.764 "name": "spare", 00:15:28.764 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:28.764 "is_configured": true, 00:15:28.764 "data_offset": 2048, 00:15:28.764 "data_size": 63488 00:15:28.764 }, 00:15:28.764 { 00:15:28.764 "name": null, 00:15:28.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.764 "is_configured": false, 00:15:28.764 "data_offset": 2048, 00:15:28.764 "data_size": 63488 00:15:28.764 }, 00:15:28.764 { 00:15:28.764 "name": "BaseBdev3", 00:15:28.764 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:28.764 "is_configured": true, 00:15:28.764 "data_offset": 2048, 00:15:28.764 "data_size": 63488 00:15:28.764 }, 00:15:28.764 { 00:15:28.764 "name": "BaseBdev4", 00:15:28.764 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:28.764 "is_configured": true, 00:15:28.764 "data_offset": 2048, 00:15:28.764 "data_size": 63488 00:15:28.764 } 00:15:28.764 ] 00:15:28.764 }' 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.764 14:32:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.329 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.329 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.329 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.329 14:32:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.329 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.329 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.330 "name": "raid_bdev1", 00:15:29.330 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:29.330 "strip_size_kb": 0, 00:15:29.330 "state": "online", 00:15:29.330 "raid_level": "raid1", 00:15:29.330 "superblock": true, 00:15:29.330 "num_base_bdevs": 4, 00:15:29.330 "num_base_bdevs_discovered": 3, 00:15:29.330 "num_base_bdevs_operational": 3, 00:15:29.330 "base_bdevs_list": [ 00:15:29.330 { 00:15:29.330 "name": "spare", 00:15:29.330 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:29.330 "is_configured": true, 00:15:29.330 "data_offset": 2048, 00:15:29.330 "data_size": 63488 00:15:29.330 }, 00:15:29.330 { 00:15:29.330 "name": null, 00:15:29.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.330 "is_configured": false, 00:15:29.330 "data_offset": 2048, 00:15:29.330 "data_size": 63488 00:15:29.330 }, 00:15:29.330 { 00:15:29.330 "name": "BaseBdev3", 00:15:29.330 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:29.330 "is_configured": true, 00:15:29.330 "data_offset": 2048, 00:15:29.330 "data_size": 63488 00:15:29.330 }, 00:15:29.330 { 00:15:29.330 "name": "BaseBdev4", 00:15:29.330 "uuid": 
"ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:29.330 "is_configured": true, 00:15:29.330 "data_offset": 2048, 00:15:29.330 "data_size": 63488 00:15:29.330 } 00:15:29.330 ] 00:15:29.330 }' 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.330 [2024-11-20 14:32:30.370512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.330 14:32:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.330 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.587 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.587 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.587 "name": "raid_bdev1", 00:15:29.587 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:29.587 "strip_size_kb": 0, 00:15:29.587 "state": "online", 00:15:29.587 "raid_level": "raid1", 00:15:29.587 "superblock": true, 00:15:29.587 "num_base_bdevs": 4, 00:15:29.587 "num_base_bdevs_discovered": 2, 00:15:29.587 "num_base_bdevs_operational": 2, 00:15:29.587 "base_bdevs_list": [ 00:15:29.587 { 
00:15:29.587 "name": null, 00:15:29.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.587 "is_configured": false, 00:15:29.587 "data_offset": 0, 00:15:29.587 "data_size": 63488 00:15:29.587 }, 00:15:29.587 { 00:15:29.587 "name": null, 00:15:29.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.587 "is_configured": false, 00:15:29.587 "data_offset": 2048, 00:15:29.587 "data_size": 63488 00:15:29.587 }, 00:15:29.587 { 00:15:29.587 "name": "BaseBdev3", 00:15:29.587 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:29.587 "is_configured": true, 00:15:29.587 "data_offset": 2048, 00:15:29.587 "data_size": 63488 00:15:29.587 }, 00:15:29.588 { 00:15:29.588 "name": "BaseBdev4", 00:15:29.588 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:29.588 "is_configured": true, 00:15:29.588 "data_offset": 2048, 00:15:29.588 "data_size": 63488 00:15:29.588 } 00:15:29.588 ] 00:15:29.588 }' 00:15:29.588 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.588 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.846 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.846 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.846 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.846 [2024-11-20 14:32:30.874690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.846 [2024-11-20 14:32:30.874972] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:29.846 [2024-11-20 14:32:30.874995] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:29.846 [2024-11-20 14:32:30.875055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.846 [2024-11-20 14:32:30.888460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:29.846 14:32:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.846 14:32:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:29.846 [2024-11-20 14:32:30.891181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.218 "name": "raid_bdev1", 00:15:31.218 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:31.218 "strip_size_kb": 0, 00:15:31.218 "state": "online", 00:15:31.218 "raid_level": "raid1", 
00:15:31.218 "superblock": true, 00:15:31.218 "num_base_bdevs": 4, 00:15:31.218 "num_base_bdevs_discovered": 3, 00:15:31.218 "num_base_bdevs_operational": 3, 00:15:31.218 "process": { 00:15:31.218 "type": "rebuild", 00:15:31.218 "target": "spare", 00:15:31.218 "progress": { 00:15:31.218 "blocks": 20480, 00:15:31.218 "percent": 32 00:15:31.218 } 00:15:31.218 }, 00:15:31.218 "base_bdevs_list": [ 00:15:31.218 { 00:15:31.218 "name": "spare", 00:15:31.218 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:31.218 "is_configured": true, 00:15:31.218 "data_offset": 2048, 00:15:31.218 "data_size": 63488 00:15:31.218 }, 00:15:31.218 { 00:15:31.218 "name": null, 00:15:31.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.218 "is_configured": false, 00:15:31.218 "data_offset": 2048, 00:15:31.218 "data_size": 63488 00:15:31.218 }, 00:15:31.218 { 00:15:31.218 "name": "BaseBdev3", 00:15:31.218 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:31.218 "is_configured": true, 00:15:31.218 "data_offset": 2048, 00:15:31.218 "data_size": 63488 00:15:31.218 }, 00:15:31.218 { 00:15:31.218 "name": "BaseBdev4", 00:15:31.218 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:31.218 "is_configured": true, 00:15:31.218 "data_offset": 2048, 00:15:31.218 "data_size": 63488 00:15:31.218 } 00:15:31.218 ] 00:15:31.218 }' 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.218 14:32:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.218 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.218 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:31.218 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:31.218 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.218 [2024-11-20 14:32:32.057029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.218 [2024-11-20 14:32:32.100961] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:31.218 [2024-11-20 14:32:32.101053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.219 [2024-11-20 14:32:32.101092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.219 [2024-11-20 14:32:32.101104] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.219 "name": "raid_bdev1", 00:15:31.219 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:31.219 "strip_size_kb": 0, 00:15:31.219 "state": "online", 00:15:31.219 "raid_level": "raid1", 00:15:31.219 "superblock": true, 00:15:31.219 "num_base_bdevs": 4, 00:15:31.219 "num_base_bdevs_discovered": 2, 00:15:31.219 "num_base_bdevs_operational": 2, 00:15:31.219 "base_bdevs_list": [ 00:15:31.219 { 00:15:31.219 "name": null, 00:15:31.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.219 "is_configured": false, 00:15:31.219 "data_offset": 0, 00:15:31.219 "data_size": 63488 00:15:31.219 }, 00:15:31.219 { 00:15:31.219 "name": null, 00:15:31.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.219 "is_configured": false, 00:15:31.219 "data_offset": 2048, 00:15:31.219 "data_size": 63488 00:15:31.219 }, 00:15:31.219 { 00:15:31.219 "name": "BaseBdev3", 00:15:31.219 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:31.219 "is_configured": true, 00:15:31.219 "data_offset": 2048, 00:15:31.219 "data_size": 63488 00:15:31.219 }, 00:15:31.219 { 00:15:31.219 "name": "BaseBdev4", 00:15:31.219 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:31.219 "is_configured": true, 00:15:31.219 "data_offset": 2048, 00:15:31.219 "data_size": 63488 00:15:31.219 } 00:15:31.219 ] 00:15:31.219 }' 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:31.219 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.784 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.784 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.784 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.784 [2024-11-20 14:32:32.657010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.784 [2024-11-20 14:32:32.657108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.784 [2024-11-20 14:32:32.657156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:31.784 [2024-11-20 14:32:32.657174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.784 [2024-11-20 14:32:32.657838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.784 [2024-11-20 14:32:32.657876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.784 [2024-11-20 14:32:32.658006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:31.784 [2024-11-20 14:32:32.658026] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:31.784 [2024-11-20 14:32:32.658046] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:31.784 [2024-11-20 14:32:32.658080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.784 [2024-11-20 14:32:32.671376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:31.784 spare 00:15:31.784 14:32:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.784 14:32:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:31.784 [2024-11-20 14:32:32.673908] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.717 "name": "raid_bdev1", 00:15:32.717 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:32.717 "strip_size_kb": 0, 00:15:32.717 "state": "online", 00:15:32.717 
"raid_level": "raid1", 00:15:32.717 "superblock": true, 00:15:32.717 "num_base_bdevs": 4, 00:15:32.717 "num_base_bdevs_discovered": 3, 00:15:32.717 "num_base_bdevs_operational": 3, 00:15:32.717 "process": { 00:15:32.717 "type": "rebuild", 00:15:32.717 "target": "spare", 00:15:32.717 "progress": { 00:15:32.717 "blocks": 20480, 00:15:32.717 "percent": 32 00:15:32.717 } 00:15:32.717 }, 00:15:32.717 "base_bdevs_list": [ 00:15:32.717 { 00:15:32.717 "name": "spare", 00:15:32.717 "uuid": "d8aa727b-6c6b-5c17-b5ad-0215aa8ee1fd", 00:15:32.717 "is_configured": true, 00:15:32.717 "data_offset": 2048, 00:15:32.717 "data_size": 63488 00:15:32.717 }, 00:15:32.717 { 00:15:32.717 "name": null, 00:15:32.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.717 "is_configured": false, 00:15:32.717 "data_offset": 2048, 00:15:32.717 "data_size": 63488 00:15:32.717 }, 00:15:32.717 { 00:15:32.717 "name": "BaseBdev3", 00:15:32.717 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:32.717 "is_configured": true, 00:15:32.717 "data_offset": 2048, 00:15:32.717 "data_size": 63488 00:15:32.717 }, 00:15:32.717 { 00:15:32.717 "name": "BaseBdev4", 00:15:32.717 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:32.717 "is_configured": true, 00:15:32.717 "data_offset": 2048, 00:15:32.717 "data_size": 63488 00:15:32.717 } 00:15:32.717 ] 00:15:32.717 }' 00:15:32.717 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.975 [2024-11-20 14:32:33.839687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.975 [2024-11-20 14:32:33.883605] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.975 [2024-11-20 14:32:33.883946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.975 [2024-11-20 14:32:33.884074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.975 [2024-11-20 14:32:33.884131] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.975 
14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.975 "name": "raid_bdev1", 00:15:32.975 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:32.975 "strip_size_kb": 0, 00:15:32.975 "state": "online", 00:15:32.975 "raid_level": "raid1", 00:15:32.975 "superblock": true, 00:15:32.975 "num_base_bdevs": 4, 00:15:32.975 "num_base_bdevs_discovered": 2, 00:15:32.975 "num_base_bdevs_operational": 2, 00:15:32.975 "base_bdevs_list": [ 00:15:32.975 { 00:15:32.975 "name": null, 00:15:32.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.975 "is_configured": false, 00:15:32.975 "data_offset": 0, 00:15:32.975 "data_size": 63488 00:15:32.975 }, 00:15:32.975 { 00:15:32.975 "name": null, 00:15:32.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.975 "is_configured": false, 00:15:32.975 "data_offset": 2048, 00:15:32.975 "data_size": 63488 00:15:32.975 }, 00:15:32.975 { 00:15:32.975 "name": "BaseBdev3", 00:15:32.975 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:32.975 "is_configured": true, 00:15:32.975 "data_offset": 2048, 00:15:32.975 "data_size": 63488 00:15:32.975 }, 00:15:32.975 { 00:15:32.975 "name": "BaseBdev4", 00:15:32.975 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:32.975 "is_configured": true, 00:15:32.975 "data_offset": 2048, 00:15:32.975 "data_size": 63488 00:15:32.975 } 00:15:32.975 ] 00:15:32.975 }' 00:15:32.975 14:32:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.975 14:32:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.541 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.541 "name": "raid_bdev1", 00:15:33.541 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:33.541 "strip_size_kb": 0, 00:15:33.541 "state": "online", 00:15:33.541 "raid_level": "raid1", 00:15:33.541 "superblock": true, 00:15:33.541 "num_base_bdevs": 4, 00:15:33.541 "num_base_bdevs_discovered": 2, 00:15:33.541 "num_base_bdevs_operational": 2, 00:15:33.541 "base_bdevs_list": [ 00:15:33.541 { 00:15:33.541 "name": null, 00:15:33.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.541 "is_configured": false, 00:15:33.541 "data_offset": 0, 00:15:33.541 "data_size": 63488 00:15:33.541 }, 00:15:33.541 
{ 00:15:33.541 "name": null, 00:15:33.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.541 "is_configured": false, 00:15:33.541 "data_offset": 2048, 00:15:33.541 "data_size": 63488 00:15:33.541 }, 00:15:33.541 { 00:15:33.541 "name": "BaseBdev3", 00:15:33.541 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:33.541 "is_configured": true, 00:15:33.541 "data_offset": 2048, 00:15:33.541 "data_size": 63488 00:15:33.541 }, 00:15:33.541 { 00:15:33.542 "name": "BaseBdev4", 00:15:33.542 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:33.542 "is_configured": true, 00:15:33.542 "data_offset": 2048, 00:15:33.542 "data_size": 63488 00:15:33.542 } 00:15:33.542 ] 00:15:33.542 }' 00:15:33.542 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.542 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.542 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.800 [2024-11-20 14:32:34.612501] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.800 [2024-11-20 14:32:34.612594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.800 [2024-11-20 14:32:34.612640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:33.800 [2024-11-20 14:32:34.612677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.800 [2024-11-20 14:32:34.613262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.800 [2024-11-20 14:32:34.613299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.800 [2024-11-20 14:32:34.613402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:33.800 [2024-11-20 14:32:34.613439] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:33.800 [2024-11-20 14:32:34.613451] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:33.800 [2024-11-20 14:32:34.613483] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:33.800 BaseBdev1 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.800 14:32:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.790 14:32:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.790 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.790 "name": "raid_bdev1", 00:15:34.790 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:34.790 "strip_size_kb": 0, 00:15:34.790 "state": "online", 00:15:34.790 "raid_level": "raid1", 00:15:34.790 "superblock": true, 00:15:34.790 "num_base_bdevs": 4, 00:15:34.790 "num_base_bdevs_discovered": 2, 00:15:34.790 "num_base_bdevs_operational": 2, 00:15:34.790 "base_bdevs_list": [ 00:15:34.790 { 00:15:34.790 "name": null, 00:15:34.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.790 "is_configured": false, 00:15:34.790 "data_offset": 0, 00:15:34.790 "data_size": 63488 00:15:34.790 }, 00:15:34.790 { 00:15:34.790 "name": null, 00:15:34.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.790 
"is_configured": false, 00:15:34.790 "data_offset": 2048, 00:15:34.790 "data_size": 63488 00:15:34.790 }, 00:15:34.790 { 00:15:34.791 "name": "BaseBdev3", 00:15:34.791 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:34.791 "is_configured": true, 00:15:34.791 "data_offset": 2048, 00:15:34.791 "data_size": 63488 00:15:34.791 }, 00:15:34.791 { 00:15:34.791 "name": "BaseBdev4", 00:15:34.791 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:34.791 "is_configured": true, 00:15:34.791 "data_offset": 2048, 00:15:34.791 "data_size": 63488 00:15:34.791 } 00:15:34.791 ] 00:15:34.791 }' 00:15:34.791 14:32:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.791 14:32:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:35.357 "name": "raid_bdev1", 00:15:35.357 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:35.357 "strip_size_kb": 0, 00:15:35.357 "state": "online", 00:15:35.357 "raid_level": "raid1", 00:15:35.357 "superblock": true, 00:15:35.357 "num_base_bdevs": 4, 00:15:35.357 "num_base_bdevs_discovered": 2, 00:15:35.357 "num_base_bdevs_operational": 2, 00:15:35.357 "base_bdevs_list": [ 00:15:35.357 { 00:15:35.357 "name": null, 00:15:35.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.357 "is_configured": false, 00:15:35.357 "data_offset": 0, 00:15:35.357 "data_size": 63488 00:15:35.357 }, 00:15:35.357 { 00:15:35.357 "name": null, 00:15:35.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.357 "is_configured": false, 00:15:35.357 "data_offset": 2048, 00:15:35.357 "data_size": 63488 00:15:35.357 }, 00:15:35.357 { 00:15:35.357 "name": "BaseBdev3", 00:15:35.357 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:35.357 "is_configured": true, 00:15:35.357 "data_offset": 2048, 00:15:35.357 "data_size": 63488 00:15:35.357 }, 00:15:35.357 { 00:15:35.357 "name": "BaseBdev4", 00:15:35.357 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:35.357 "is_configured": true, 00:15:35.357 "data_offset": 2048, 00:15:35.357 "data_size": 63488 00:15:35.357 } 00:15:35.357 ] 00:15:35.357 }' 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.357 [2024-11-20 14:32:36.305050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.357 [2024-11-20 14:32:36.305395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:35.357 [2024-11-20 14:32:36.305419] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:35.357 request: 00:15:35.357 { 00:15:35.357 "base_bdev": "BaseBdev1", 00:15:35.357 "raid_bdev": "raid_bdev1", 00:15:35.357 "method": "bdev_raid_add_base_bdev", 00:15:35.357 "req_id": 1 00:15:35.357 } 00:15:35.357 Got JSON-RPC error response 00:15:35.357 response: 00:15:35.357 { 00:15:35.357 "code": -22, 00:15:35.357 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:35.357 } 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:35.357 14:32:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:36.292 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.608 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.608 "name": "raid_bdev1", 00:15:36.608 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:36.608 "strip_size_kb": 0, 00:15:36.608 "state": "online", 00:15:36.608 "raid_level": "raid1", 00:15:36.608 "superblock": true, 00:15:36.608 "num_base_bdevs": 4, 00:15:36.608 "num_base_bdevs_discovered": 2, 00:15:36.608 "num_base_bdevs_operational": 2, 00:15:36.608 "base_bdevs_list": [ 00:15:36.608 { 00:15:36.608 "name": null, 00:15:36.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.608 "is_configured": false, 00:15:36.608 "data_offset": 0, 00:15:36.608 "data_size": 63488 00:15:36.608 }, 00:15:36.608 { 00:15:36.608 "name": null, 00:15:36.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.608 "is_configured": false, 00:15:36.608 "data_offset": 2048, 00:15:36.608 "data_size": 63488 00:15:36.608 }, 00:15:36.608 { 00:15:36.608 "name": "BaseBdev3", 00:15:36.608 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:36.608 "is_configured": true, 00:15:36.608 "data_offset": 2048, 00:15:36.608 "data_size": 63488 00:15:36.608 }, 00:15:36.608 { 00:15:36.608 "name": "BaseBdev4", 00:15:36.608 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:36.608 "is_configured": true, 00:15:36.608 "data_offset": 2048, 00:15:36.608 "data_size": 63488 00:15:36.608 } 00:15:36.608 ] 00:15:36.608 }' 00:15:36.608 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.608 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.866 14:32:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.866 "name": "raid_bdev1", 00:15:36.866 "uuid": "e9dc3279-aad5-4a0f-808f-6d303436b3ed", 00:15:36.866 "strip_size_kb": 0, 00:15:36.866 "state": "online", 00:15:36.866 "raid_level": "raid1", 00:15:36.866 "superblock": true, 00:15:36.866 "num_base_bdevs": 4, 00:15:36.866 "num_base_bdevs_discovered": 2, 00:15:36.866 "num_base_bdevs_operational": 2, 00:15:36.866 "base_bdevs_list": [ 00:15:36.866 { 00:15:36.866 "name": null, 00:15:36.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.866 "is_configured": false, 00:15:36.866 "data_offset": 0, 00:15:36.866 "data_size": 63488 00:15:36.866 }, 00:15:36.866 { 00:15:36.866 "name": null, 00:15:36.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.866 "is_configured": false, 00:15:36.866 "data_offset": 2048, 00:15:36.866 "data_size": 63488 00:15:36.866 }, 00:15:36.866 { 00:15:36.866 "name": "BaseBdev3", 00:15:36.866 "uuid": "edb4ebcf-412b-5bb0-990b-e9c460ad615f", 00:15:36.866 "is_configured": true, 00:15:36.866 "data_offset": 2048, 00:15:36.866 "data_size": 63488 00:15:36.866 }, 
00:15:36.866 { 00:15:36.866 "name": "BaseBdev4", 00:15:36.866 "uuid": "ce60b3b8-5782-54ce-b013-53fbd72266c2", 00:15:36.866 "is_configured": true, 00:15:36.866 "data_offset": 2048, 00:15:36.866 "data_size": 63488 00:15:36.866 } 00:15:36.866 ] 00:15:36.866 }' 00:15:36.866 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.124 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.124 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.124 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.124 14:32:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78359 00:15:37.124 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78359 ']' 00:15:37.124 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78359 00:15:37.124 14:32:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:37.124 14:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.124 14:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78359 00:15:37.124 killing process with pid 78359 00:15:37.124 Received shutdown signal, test time was about 60.000000 seconds 00:15:37.124 00:15:37.124 Latency(us) 00:15:37.124 [2024-11-20T14:32:38.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.124 [2024-11-20T14:32:38.181Z] =================================================================================================================== 00:15:37.124 [2024-11-20T14:32:38.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:37.124 14:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:37.124 14:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.124 14:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78359' 00:15:37.124 14:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78359 00:15:37.124 [2024-11-20 14:32:38.032704] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.124 14:32:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78359 00:15:37.124 [2024-11-20 14:32:38.032884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.124 [2024-11-20 14:32:38.033008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.124 [2024-11-20 14:32:38.033028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:37.690 [2024-11-20 14:32:38.480998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:38.625 00:15:38.625 real 0m29.701s 00:15:38.625 user 0m35.340s 00:15:38.625 sys 0m4.134s 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.625 ************************************ 00:15:38.625 END TEST raid_rebuild_test_sb 00:15:38.625 ************************************ 00:15:38.625 14:32:39 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:38.625 14:32:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:38.625 14:32:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.625 14:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:38.625 ************************************ 00:15:38.625 START TEST raid_rebuild_test_io 00:15:38.625 ************************************ 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.625 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79157 00:15:38.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79157 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79157 ']' 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.626 14:32:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.883 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.883 Zero copy mechanism will not be used. 00:15:38.883 [2024-11-20 14:32:39.737122] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:15:38.883 [2024-11-20 14:32:39.737313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79157 ] 00:15:38.883 [2024-11-20 14:32:39.928218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.141 [2024-11-20 14:32:40.084050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.399 [2024-11-20 14:32:40.302062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.399 [2024-11-20 14:32:40.302162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.656 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.656 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:39.656 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.656 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.656 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.656 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.914 BaseBdev1_malloc 00:15:39.914 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.914 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.914 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.914 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 [2024-11-20 14:32:40.743170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:39.915 [2024-11-20 14:32:40.743444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.915 [2024-11-20 14:32:40.743524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:39.915 [2024-11-20 14:32:40.743693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.915 [2024-11-20 14:32:40.746495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.915 [2024-11-20 14:32:40.746546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.915 BaseBdev1 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 BaseBdev2_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 [2024-11-20 14:32:40.795417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:39.915 [2024-11-20 14:32:40.795512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.915 [2024-11-20 14:32:40.795547] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:39.915 [2024-11-20 14:32:40.795567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.915 [2024-11-20 14:32:40.798418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.915 [2024-11-20 14:32:40.798467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.915 BaseBdev2 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 BaseBdev3_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 [2024-11-20 14:32:40.851692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:39.915 [2024-11-20 14:32:40.851773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.915 [2024-11-20 14:32:40.851806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:39.915 [2024-11-20 14:32:40.851827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:39.915 [2024-11-20 14:32:40.854586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.915 [2024-11-20 14:32:40.854648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:39.915 BaseBdev3 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 BaseBdev4_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 [2024-11-20 14:32:40.900189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:39.915 [2024-11-20 14:32:40.900288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.915 [2024-11-20 14:32:40.900328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:39.915 [2024-11-20 14:32:40.900349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.915 [2024-11-20 14:32:40.903089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.915 [2024-11-20 14:32:40.903140] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:39.915 BaseBdev4 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 spare_malloc 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 spare_delay 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 [2024-11-20 14:32:40.958553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.915 [2024-11-20 14:32:40.958653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.915 [2024-11-20 14:32:40.958683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:39.915 [2024-11-20 14:32:40.958702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:39.915 [2024-11-20 14:32:40.961496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.915 [2024-11-20 14:32:40.961545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.915 spare 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.915 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.915 [2024-11-20 14:32:40.966605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.173 [2024-11-20 14:32:40.969244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.173 [2024-11-20 14:32:40.969341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.173 [2024-11-20 14:32:40.969423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:40.173 [2024-11-20 14:32:40.969533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:40.173 [2024-11-20 14:32:40.969556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:40.173 [2024-11-20 14:32:40.969898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:40.173 [2024-11-20 14:32:40.970123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:40.173 [2024-11-20 14:32:40.970286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:40.173 [2024-11-20 14:32:40.970546] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.173 14:32:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.173 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.173 "name": "raid_bdev1", 00:15:40.173 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:40.174 
"strip_size_kb": 0, 00:15:40.174 "state": "online", 00:15:40.174 "raid_level": "raid1", 00:15:40.174 "superblock": false, 00:15:40.174 "num_base_bdevs": 4, 00:15:40.174 "num_base_bdevs_discovered": 4, 00:15:40.174 "num_base_bdevs_operational": 4, 00:15:40.174 "base_bdevs_list": [ 00:15:40.174 { 00:15:40.174 "name": "BaseBdev1", 00:15:40.174 "uuid": "837f1bb8-a549-5b7d-bb54-9656f2ddad05", 00:15:40.174 "is_configured": true, 00:15:40.174 "data_offset": 0, 00:15:40.174 "data_size": 65536 00:15:40.174 }, 00:15:40.174 { 00:15:40.174 "name": "BaseBdev2", 00:15:40.174 "uuid": "3dba3e11-09ec-5763-9bd9-32dc8f623ee4", 00:15:40.174 "is_configured": true, 00:15:40.174 "data_offset": 0, 00:15:40.174 "data_size": 65536 00:15:40.174 }, 00:15:40.174 { 00:15:40.174 "name": "BaseBdev3", 00:15:40.174 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:40.174 "is_configured": true, 00:15:40.174 "data_offset": 0, 00:15:40.174 "data_size": 65536 00:15:40.174 }, 00:15:40.174 { 00:15:40.174 "name": "BaseBdev4", 00:15:40.174 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:40.174 "is_configured": true, 00:15:40.174 "data_offset": 0, 00:15:40.174 "data_size": 65536 00:15:40.174 } 00:15:40.174 ] 00:15:40.174 }' 00:15:40.174 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.174 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.431 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.431 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.431 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.431 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:40.431 [2024-11-20 14:32:41.467251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.690 14:32:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.690 [2024-11-20 14:32:41.570895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.690 "name": "raid_bdev1", 00:15:40.690 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:40.690 "strip_size_kb": 0, 00:15:40.690 "state": "online", 00:15:40.690 "raid_level": "raid1", 00:15:40.690 "superblock": false, 00:15:40.690 "num_base_bdevs": 4, 00:15:40.690 "num_base_bdevs_discovered": 3, 00:15:40.690 "num_base_bdevs_operational": 3, 00:15:40.690 "base_bdevs_list": [ 00:15:40.690 { 00:15:40.690 "name": null, 00:15:40.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.690 "is_configured": false, 00:15:40.690 "data_offset": 0, 00:15:40.690 "data_size": 65536 00:15:40.690 
}, 00:15:40.690 { 00:15:40.690 "name": "BaseBdev2", 00:15:40.690 "uuid": "3dba3e11-09ec-5763-9bd9-32dc8f623ee4", 00:15:40.690 "is_configured": true, 00:15:40.690 "data_offset": 0, 00:15:40.690 "data_size": 65536 00:15:40.690 }, 00:15:40.690 { 00:15:40.690 "name": "BaseBdev3", 00:15:40.690 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:40.690 "is_configured": true, 00:15:40.690 "data_offset": 0, 00:15:40.690 "data_size": 65536 00:15:40.690 }, 00:15:40.690 { 00:15:40.690 "name": "BaseBdev4", 00:15:40.690 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:40.690 "is_configured": true, 00:15:40.690 "data_offset": 0, 00:15:40.690 "data_size": 65536 00:15:40.690 } 00:15:40.690 ] 00:15:40.690 }' 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.690 14:32:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.948 [2024-11-20 14:32:41.747061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:40.948 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:40.948 Zero copy mechanism will not be used. 00:15:40.948 Running I/O for 60 seconds... 
00:15:41.206 14:32:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.206 14:32:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.206 14:32:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.206 [2024-11-20 14:32:42.157656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.206 14:32:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.206 14:32:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.206 [2024-11-20 14:32:42.232193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:41.206 [2024-11-20 14:32:42.235028] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.464 [2024-11-20 14:32:42.356362] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:41.464 [2024-11-20 14:32:42.357981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:41.722 [2024-11-20 14:32:42.598552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:41.722 [2024-11-20 14:32:42.598987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:41.979 132.00 IOPS, 396.00 MiB/s [2024-11-20T14:32:43.036Z] [2024-11-20 14:32:42.845599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:42.237 [2024-11-20 14:32:43.070664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:42.237 [2024-11-20 14:32:43.071069] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.237 "name": "raid_bdev1", 00:15:42.237 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:42.237 "strip_size_kb": 0, 00:15:42.237 "state": "online", 00:15:42.237 "raid_level": "raid1", 00:15:42.237 "superblock": false, 00:15:42.237 "num_base_bdevs": 4, 00:15:42.237 "num_base_bdevs_discovered": 4, 00:15:42.237 "num_base_bdevs_operational": 4, 00:15:42.237 "process": { 00:15:42.237 "type": "rebuild", 00:15:42.237 "target": "spare", 00:15:42.237 "progress": { 00:15:42.237 "blocks": 12288, 00:15:42.237 "percent": 18 00:15:42.237 } 00:15:42.237 }, 00:15:42.237 "base_bdevs_list": [ 00:15:42.237 { 00:15:42.237 "name": "spare", 00:15:42.237 "uuid": 
"b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev2", 00:15:42.237 "uuid": "3dba3e11-09ec-5763-9bd9-32dc8f623ee4", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev3", 00:15:42.237 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev4", 00:15:42.237 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 65536 00:15:42.237 } 00:15:42.237 ] 00:15:42.237 }' 00:15:42.237 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.495 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.495 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.495 [2024-11-20 14:32:43.344422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:42.495 [2024-11-20 14:32:43.346315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:42.495 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.495 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.495 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.495 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.495 [2024-11-20 
14:32:43.367996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.495 [2024-11-20 14:32:43.448873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:42.495 [2024-11-20 14:32:43.449721] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:42.495 [2024-11-20 14:32:43.458446] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.495 [2024-11-20 14:32:43.471488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.495 [2024-11-20 14:32:43.471704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.495 [2024-11-20 14:32:43.471765] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.495 [2024-11-20 14:32:43.512654] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.496 14:32:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.496 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.754 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.754 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.754 "name": "raid_bdev1", 00:15:42.754 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:42.754 "strip_size_kb": 0, 00:15:42.754 "state": "online", 00:15:42.754 "raid_level": "raid1", 00:15:42.754 "superblock": false, 00:15:42.754 "num_base_bdevs": 4, 00:15:42.754 "num_base_bdevs_discovered": 3, 00:15:42.754 "num_base_bdevs_operational": 3, 00:15:42.754 "base_bdevs_list": [ 00:15:42.754 { 00:15:42.754 "name": null, 00:15:42.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.754 "is_configured": false, 00:15:42.754 "data_offset": 0, 00:15:42.754 "data_size": 65536 00:15:42.754 }, 00:15:42.754 { 00:15:42.754 "name": "BaseBdev2", 00:15:42.754 "uuid": "3dba3e11-09ec-5763-9bd9-32dc8f623ee4", 00:15:42.754 "is_configured": true, 00:15:42.754 "data_offset": 0, 00:15:42.754 "data_size": 65536 00:15:42.754 }, 00:15:42.754 { 00:15:42.754 "name": "BaseBdev3", 00:15:42.754 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:42.754 "is_configured": true, 00:15:42.754 "data_offset": 0, 00:15:42.754 "data_size": 65536 00:15:42.754 }, 
00:15:42.754 { 00:15:42.754 "name": "BaseBdev4", 00:15:42.754 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:42.754 "is_configured": true, 00:15:42.754 "data_offset": 0, 00:15:42.754 "data_size": 65536 00:15:42.754 } 00:15:42.754 ] 00:15:42.754 }' 00:15:42.754 14:32:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.754 14:32:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.011 110.00 IOPS, 330.00 MiB/s [2024-11-20T14:32:44.068Z] 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.011 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.269 14:32:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.269 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.269 "name": "raid_bdev1", 00:15:43.269 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:43.269 "strip_size_kb": 0, 00:15:43.269 "state": "online", 00:15:43.269 "raid_level": "raid1", 00:15:43.269 "superblock": false, 00:15:43.269 "num_base_bdevs": 4, 00:15:43.269 
"num_base_bdevs_discovered": 3, 00:15:43.269 "num_base_bdevs_operational": 3, 00:15:43.269 "base_bdevs_list": [ 00:15:43.269 { 00:15:43.269 "name": null, 00:15:43.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.269 "is_configured": false, 00:15:43.269 "data_offset": 0, 00:15:43.269 "data_size": 65536 00:15:43.269 }, 00:15:43.269 { 00:15:43.269 "name": "BaseBdev2", 00:15:43.270 "uuid": "3dba3e11-09ec-5763-9bd9-32dc8f623ee4", 00:15:43.270 "is_configured": true, 00:15:43.270 "data_offset": 0, 00:15:43.270 "data_size": 65536 00:15:43.270 }, 00:15:43.270 { 00:15:43.270 "name": "BaseBdev3", 00:15:43.270 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:43.270 "is_configured": true, 00:15:43.270 "data_offset": 0, 00:15:43.270 "data_size": 65536 00:15:43.270 }, 00:15:43.270 { 00:15:43.270 "name": "BaseBdev4", 00:15:43.270 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:43.270 "is_configured": true, 00:15:43.270 "data_offset": 0, 00:15:43.270 "data_size": 65536 00:15:43.270 } 00:15:43.270 ] 00:15:43.270 }' 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.270 [2024-11-20 14:32:44.220090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.270 14:32:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:43.270 [2024-11-20 14:32:44.314421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:43.270 [2024-11-20 14:32:44.317111] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.528 [2024-11-20 14:32:44.444576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.528 [2024-11-20 14:32:44.446230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.786 [2024-11-20 14:32:44.660440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.786 [2024-11-20 14:32:44.661340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:44.387 112.00 IOPS, 336.00 MiB/s [2024-11-20T14:32:45.444Z] [2024-11-20 14:32:45.164188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.387 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.387 "name": "raid_bdev1", 00:15:44.387 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:44.387 "strip_size_kb": 0, 00:15:44.387 "state": "online", 00:15:44.387 "raid_level": "raid1", 00:15:44.387 "superblock": false, 00:15:44.387 "num_base_bdevs": 4, 00:15:44.387 "num_base_bdevs_discovered": 4, 00:15:44.387 "num_base_bdevs_operational": 4, 00:15:44.387 "process": { 00:15:44.387 "type": "rebuild", 00:15:44.387 "target": "spare", 00:15:44.387 "progress": { 00:15:44.387 "blocks": 12288, 00:15:44.387 "percent": 18 00:15:44.387 } 00:15:44.387 }, 00:15:44.387 "base_bdevs_list": [ 00:15:44.387 { 00:15:44.387 "name": "spare", 00:15:44.387 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 0, 00:15:44.387 "data_size": 65536 00:15:44.387 }, 00:15:44.387 { 00:15:44.387 "name": "BaseBdev2", 00:15:44.387 "uuid": "3dba3e11-09ec-5763-9bd9-32dc8f623ee4", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 0, 00:15:44.387 "data_size": 65536 00:15:44.387 }, 00:15:44.387 { 00:15:44.387 "name": "BaseBdev3", 00:15:44.387 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:44.387 "is_configured": true, 00:15:44.387 "data_offset": 0, 00:15:44.388 "data_size": 65536 00:15:44.388 }, 00:15:44.388 { 00:15:44.388 "name": "BaseBdev4", 00:15:44.388 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:44.388 "is_configured": true, 00:15:44.388 "data_offset": 0, 00:15:44.388 "data_size": 65536 00:15:44.388 } 00:15:44.388 ] 00:15:44.388 }' 
00:15:44.388 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.388 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.388 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.388 [2024-11-20 14:32:45.409370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.651 [2024-11-20 14:32:45.441477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:44.651 [2024-11-20 14:32:45.669930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:44.651 [2024-11-20 14:32:45.678781] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:44.651 [2024-11-20 14:32:45.678825] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:44.651 [2024-11-20 14:32:45.678898] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.651 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.909 "name": "raid_bdev1", 00:15:44.909 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:44.909 "strip_size_kb": 0, 00:15:44.909 "state": "online", 00:15:44.909 "raid_level": "raid1", 00:15:44.909 "superblock": false, 00:15:44.909 "num_base_bdevs": 4, 00:15:44.909 "num_base_bdevs_discovered": 3, 00:15:44.909 "num_base_bdevs_operational": 3, 
00:15:44.909 "process": { 00:15:44.909 "type": "rebuild", 00:15:44.909 "target": "spare", 00:15:44.909 "progress": { 00:15:44.909 "blocks": 16384, 00:15:44.909 "percent": 25 00:15:44.909 } 00:15:44.909 }, 00:15:44.909 "base_bdevs_list": [ 00:15:44.909 { 00:15:44.909 "name": "spare", 00:15:44.909 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:44.909 "is_configured": true, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 }, 00:15:44.909 { 00:15:44.909 "name": null, 00:15:44.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.909 "is_configured": false, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 }, 00:15:44.909 { 00:15:44.909 "name": "BaseBdev3", 00:15:44.909 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:44.909 "is_configured": true, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 }, 00:15:44.909 { 00:15:44.909 "name": "BaseBdev4", 00:15:44.909 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:44.909 "is_configured": true, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 } 00:15:44.909 ] 00:15:44.909 }' 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.909 107.25 IOPS, 321.75 MiB/s [2024-11-20T14:32:45.966Z] 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=527 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.909 "name": "raid_bdev1", 00:15:44.909 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:44.909 "strip_size_kb": 0, 00:15:44.909 "state": "online", 00:15:44.909 "raid_level": "raid1", 00:15:44.909 "superblock": false, 00:15:44.909 "num_base_bdevs": 4, 00:15:44.909 "num_base_bdevs_discovered": 3, 00:15:44.909 "num_base_bdevs_operational": 3, 00:15:44.909 "process": { 00:15:44.909 "type": "rebuild", 00:15:44.909 "target": "spare", 00:15:44.909 "progress": { 00:15:44.909 "blocks": 18432, 00:15:44.909 "percent": 28 00:15:44.909 } 00:15:44.909 }, 00:15:44.909 "base_bdevs_list": [ 00:15:44.909 { 00:15:44.909 "name": "spare", 00:15:44.909 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:44.909 "is_configured": true, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 }, 00:15:44.909 { 00:15:44.909 "name": null, 00:15:44.909 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:44.909 "is_configured": false, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 }, 00:15:44.909 { 00:15:44.909 "name": "BaseBdev3", 00:15:44.909 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:44.909 "is_configured": true, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 }, 00:15:44.909 { 00:15:44.909 "name": "BaseBdev4", 00:15:44.909 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:44.909 "is_configured": true, 00:15:44.909 "data_offset": 0, 00:15:44.909 "data_size": 65536 00:15:44.909 } 00:15:44.909 ] 00:15:44.909 }' 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.909 [2024-11-20 14:32:45.958254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.909 14:32:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.167 14:32:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.167 14:32:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.167 [2024-11-20 14:32:46.190360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:45.732 [2024-11-20 14:32:46.534048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:45.990 96.00 IOPS, 288.00 MiB/s [2024-11-20T14:32:47.047Z] [2024-11-20 14:32:46.910826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:45.990 [2024-11-20 14:32:46.911703] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.990 14:32:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.248 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.248 "name": "raid_bdev1", 00:15:46.248 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:46.248 "strip_size_kb": 0, 00:15:46.248 "state": "online", 00:15:46.248 "raid_level": "raid1", 00:15:46.248 "superblock": false, 00:15:46.248 "num_base_bdevs": 4, 00:15:46.248 "num_base_bdevs_discovered": 3, 00:15:46.248 "num_base_bdevs_operational": 3, 00:15:46.248 "process": { 00:15:46.248 "type": "rebuild", 00:15:46.248 "target": "spare", 00:15:46.248 "progress": { 00:15:46.248 "blocks": 34816, 00:15:46.248 "percent": 53 00:15:46.248 } 00:15:46.248 }, 00:15:46.248 "base_bdevs_list": [ 00:15:46.248 { 
00:15:46.248 "name": "spare", 00:15:46.248 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:46.248 "is_configured": true, 00:15:46.248 "data_offset": 0, 00:15:46.248 "data_size": 65536 00:15:46.248 }, 00:15:46.248 { 00:15:46.248 "name": null, 00:15:46.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.248 "is_configured": false, 00:15:46.248 "data_offset": 0, 00:15:46.248 "data_size": 65536 00:15:46.248 }, 00:15:46.248 { 00:15:46.248 "name": "BaseBdev3", 00:15:46.248 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:46.248 "is_configured": true, 00:15:46.248 "data_offset": 0, 00:15:46.248 "data_size": 65536 00:15:46.248 }, 00:15:46.248 { 00:15:46.248 "name": "BaseBdev4", 00:15:46.248 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:46.248 "is_configured": true, 00:15:46.248 "data_offset": 0, 00:15:46.248 "data_size": 65536 00:15:46.248 } 00:15:46.248 ] 00:15:46.248 }' 00:15:46.248 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.248 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.248 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.248 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.248 14:32:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.506 [2024-11-20 14:32:47.377975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:47.022 86.17 IOPS, 258.50 MiB/s [2024-11-20T14:32:48.079Z] [2024-11-20 14:32:47.963474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.280 "name": "raid_bdev1", 00:15:47.280 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:47.280 "strip_size_kb": 0, 00:15:47.280 "state": "online", 00:15:47.280 "raid_level": "raid1", 00:15:47.280 "superblock": false, 00:15:47.280 "num_base_bdevs": 4, 00:15:47.280 "num_base_bdevs_discovered": 3, 00:15:47.280 "num_base_bdevs_operational": 3, 00:15:47.280 "process": { 00:15:47.280 "type": "rebuild", 00:15:47.280 "target": "spare", 00:15:47.280 "progress": { 00:15:47.280 "blocks": 55296, 00:15:47.280 "percent": 84 00:15:47.280 } 00:15:47.280 }, 00:15:47.280 "base_bdevs_list": [ 00:15:47.280 { 00:15:47.280 "name": "spare", 00:15:47.280 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:47.280 "is_configured": true, 00:15:47.280 "data_offset": 0, 00:15:47.280 "data_size": 65536 00:15:47.280 }, 00:15:47.280 { 
00:15:47.280 "name": null, 00:15:47.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.280 "is_configured": false, 00:15:47.280 "data_offset": 0, 00:15:47.280 "data_size": 65536 00:15:47.280 }, 00:15:47.280 { 00:15:47.280 "name": "BaseBdev3", 00:15:47.280 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:47.280 "is_configured": true, 00:15:47.280 "data_offset": 0, 00:15:47.280 "data_size": 65536 00:15:47.280 }, 00:15:47.280 { 00:15:47.280 "name": "BaseBdev4", 00:15:47.280 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:47.280 "is_configured": true, 00:15:47.280 "data_offset": 0, 00:15:47.280 "data_size": 65536 00:15:47.280 } 00:15:47.280 ] 00:15:47.280 }' 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.280 [2024-11-20 14:32:48.282532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.280 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.538 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.538 14:32:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.796 [2024-11-20 14:32:48.738007] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:47.796 79.86 IOPS, 239.57 MiB/s [2024-11-20T14:32:48.853Z] [2024-11-20 14:32:48.838036] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:47.796 [2024-11-20 14:32:48.840219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.359 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.359 14:32:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.360 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.617 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.617 "name": "raid_bdev1", 00:15:48.617 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:48.617 "strip_size_kb": 0, 00:15:48.617 "state": "online", 00:15:48.617 "raid_level": "raid1", 00:15:48.617 "superblock": false, 00:15:48.617 "num_base_bdevs": 4, 00:15:48.618 "num_base_bdevs_discovered": 3, 00:15:48.618 "num_base_bdevs_operational": 3, 00:15:48.618 "base_bdevs_list": [ 00:15:48.618 { 00:15:48.618 "name": "spare", 00:15:48.618 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:48.618 "is_configured": true, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 }, 00:15:48.618 { 00:15:48.618 "name": null, 00:15:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.618 "is_configured": false, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 }, 
00:15:48.618 { 00:15:48.618 "name": "BaseBdev3", 00:15:48.618 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:48.618 "is_configured": true, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 }, 00:15:48.618 { 00:15:48.618 "name": "BaseBdev4", 00:15:48.618 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:48.618 "is_configured": true, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 } 00:15:48.618 ] 00:15:48.618 }' 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.618 "name": "raid_bdev1", 00:15:48.618 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:48.618 "strip_size_kb": 0, 00:15:48.618 "state": "online", 00:15:48.618 "raid_level": "raid1", 00:15:48.618 "superblock": false, 00:15:48.618 "num_base_bdevs": 4, 00:15:48.618 "num_base_bdevs_discovered": 3, 00:15:48.618 "num_base_bdevs_operational": 3, 00:15:48.618 "base_bdevs_list": [ 00:15:48.618 { 00:15:48.618 "name": "spare", 00:15:48.618 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:48.618 "is_configured": true, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 }, 00:15:48.618 { 00:15:48.618 "name": null, 00:15:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.618 "is_configured": false, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 }, 00:15:48.618 { 00:15:48.618 "name": "BaseBdev3", 00:15:48.618 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:48.618 "is_configured": true, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 }, 00:15:48.618 { 00:15:48.618 "name": "BaseBdev4", 00:15:48.618 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:48.618 "is_configured": true, 00:15:48.618 "data_offset": 0, 00:15:48.618 "data_size": 65536 00:15:48.618 } 00:15:48.618 ] 00:15:48.618 }' 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.618 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.877 "name": "raid_bdev1", 00:15:48.877 "uuid": "1df8178c-5ce7-4fc9-9c0f-d1bc6ea7c7fc", 00:15:48.877 "strip_size_kb": 0, 00:15:48.877 "state": "online", 00:15:48.877 "raid_level": "raid1", 00:15:48.877 "superblock": false, 00:15:48.877 
"num_base_bdevs": 4, 00:15:48.877 "num_base_bdevs_discovered": 3, 00:15:48.877 "num_base_bdevs_operational": 3, 00:15:48.877 "base_bdevs_list": [ 00:15:48.877 { 00:15:48.877 "name": "spare", 00:15:48.877 "uuid": "b559de0f-36b8-50ee-a968-87e8bf81dbe6", 00:15:48.877 "is_configured": true, 00:15:48.877 "data_offset": 0, 00:15:48.877 "data_size": 65536 00:15:48.877 }, 00:15:48.877 { 00:15:48.877 "name": null, 00:15:48.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.877 "is_configured": false, 00:15:48.877 "data_offset": 0, 00:15:48.877 "data_size": 65536 00:15:48.877 }, 00:15:48.877 { 00:15:48.877 "name": "BaseBdev3", 00:15:48.877 "uuid": "344c8c3e-eafa-52c6-b677-aa599f1a80be", 00:15:48.877 "is_configured": true, 00:15:48.877 "data_offset": 0, 00:15:48.877 "data_size": 65536 00:15:48.877 }, 00:15:48.877 { 00:15:48.877 "name": "BaseBdev4", 00:15:48.877 "uuid": "f4e86a3f-98e2-5c50-b8f5-3342f8b07d92", 00:15:48.877 "is_configured": true, 00:15:48.877 "data_offset": 0, 00:15:48.877 "data_size": 65536 00:15:48.877 } 00:15:48.877 ] 00:15:48.877 }' 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.877 14:32:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.135 75.50 IOPS, 226.50 MiB/s [2024-11-20T14:32:50.192Z] 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.135 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.135 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.135 [2024-11-20 14:32:50.183741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.135 [2024-11-20 14:32:50.183801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.393 00:15:49.393 Latency(us) 00:15:49.393 [2024-11-20T14:32:50.450Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:15:49.393 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:49.393 raid_bdev1 : 8.53 73.24 219.72 0.00 0.00 18262.73 283.00 113913.48 00:15:49.393 [2024-11-20T14:32:50.450Z] =================================================================================================================== 00:15:49.393 [2024-11-20T14:32:50.450Z] Total : 73.24 219.72 0.00 0.00 18262.73 283.00 113913.48 00:15:49.393 [2024-11-20 14:32:50.304158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.393 [2024-11-20 14:32:50.304236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.393 [2024-11-20 14:32:50.304411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.393 [2024-11-20 14:32:50.304430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:49.393 { 00:15:49.393 "results": [ 00:15:49.393 { 00:15:49.393 "job": "raid_bdev1", 00:15:49.393 "core_mask": "0x1", 00:15:49.393 "workload": "randrw", 00:15:49.393 "percentage": 50, 00:15:49.393 "status": "finished", 00:15:49.393 "queue_depth": 2, 00:15:49.393 "io_size": 3145728, 00:15:49.393 "runtime": 8.533566, 00:15:49.393 "iops": 73.24019056042926, 00:15:49.393 "mibps": 219.72057168128777, 00:15:49.393 "io_failed": 0, 00:15:49.393 "io_timeout": 0, 00:15:49.393 "avg_latency_us": 18262.732427636365, 00:15:49.393 "min_latency_us": 282.99636363636364, 00:15:49.393 "max_latency_us": 113913.48363636364 00:15:49.393 } 00:15:49.393 ], 00:15:49.393 "core_count": 1 00:15:49.393 } 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- 
# jq length 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.393 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:49.684 /dev/nbd0 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:49.684 14:32:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.684 1+0 records in 00:15:49.684 1+0 records out 00:15:49.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432467 s, 9.5 MB/s 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.684 14:32:50 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:50.251 /dev/nbd1 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.251 1+0 records in 00:15:50.251 1+0 records out 00:15:50.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375907 s, 10.9 MB/s 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.251 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:50.517 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:50.777 /dev/nbd1 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.035 14:32:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.035 1+0 records in 00:15:51.035 1+0 records out 00:15:51.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621642 s, 6.6 MB/s 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.035 14:32:51 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd1 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.293 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:51.294 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.294 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.552 14:32:52 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79157 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79157 ']' 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79157 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79157 00:15:51.552 killing process with pid 79157 00:15:51.552 Received shutdown signal, test time was about 10.802964 seconds 00:15:51.552 00:15:51.552 Latency(us) 00:15:51.552 [2024-11-20T14:32:52.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.552 [2024-11-20T14:32:52.609Z] =================================================================================================================== 00:15:51.552 [2024-11-20T14:32:52.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.552 14:32:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79157' 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79157 00:15:51.552 14:32:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79157 00:15:51.552 [2024-11-20 14:32:52.553064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.119 [2024-11-20 14:32:52.933904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.054 00:15:53.054 real 0m14.439s 00:15:53.054 user 0m18.979s 00:15:53.054 sys 0m1.799s 00:15:53.054 ************************************ 00:15:53.054 END TEST raid_rebuild_test_io 00:15:53.054 ************************************ 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 14:32:54 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:53.054 14:32:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:53.054 14:32:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.054 14:32:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.054 ************************************ 00:15:53.054 START TEST raid_rebuild_test_sb_io 00:15:53.054 ************************************ 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:53.054 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:53.313 14:32:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:53.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79577 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79577 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79577 ']' 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.313 14:32:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.313 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.313 Zero copy mechanism will not be used. 00:15:53.313 [2024-11-20 14:32:54.216935] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:15:53.313 [2024-11-20 14:32:54.217126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79577 ] 00:15:53.571 [2024-11-20 14:32:54.406425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.571 [2024-11-20 14:32:54.563270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.829 [2024-11-20 14:32:54.784824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.829 [2024-11-20 14:32:54.784886] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.396 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.396 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:54.396 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.396 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 BaseBdev1_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 [2024-11-20 14:32:55.207808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.397 [2024-11-20 14:32:55.208154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.397 [2024-11-20 14:32:55.208199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:54.397 [2024-11-20 14:32:55.208219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.397 [2024-11-20 14:32:55.211071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.397 [2024-11-20 14:32:55.211124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.397 BaseBdev1 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 BaseBdev2_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 [2024-11-20 14:32:55.260481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:54.397 [2024-11-20 14:32:55.260576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.397 [2024-11-20 14:32:55.260612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:54.397 [2024-11-20 14:32:55.260650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.397 [2024-11-20 14:32:55.263693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.397 [2024-11-20 14:32:55.263741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.397 BaseBdev2 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 BaseBdev3_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 [2024-11-20 14:32:55.324552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:54.397 [2024-11-20 14:32:55.324660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.397 [2024-11-20 14:32:55.324697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:54.397 [2024-11-20 14:32:55.324717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.397 [2024-11-20 14:32:55.327669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.397 [2024-11-20 14:32:55.327720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.397 BaseBdev3 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 BaseBdev4_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 [2024-11-20 14:32:55.373851] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:15:54.397 [2024-11-20 14:32:55.373940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.397 [2024-11-20 14:32:55.373972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:54.397 [2024-11-20 14:32:55.373991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.397 [2024-11-20 14:32:55.376745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.397 [2024-11-20 14:32:55.376988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:54.397 BaseBdev4 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 spare_malloc 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 spare_delay 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.397 [2024-11-20 14:32:55.442207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.397 [2024-11-20 14:32:55.442490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.397 [2024-11-20 14:32:55.442564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:54.397 [2024-11-20 14:32:55.442727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.397 [2024-11-20 14:32:55.445564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.397 [2024-11-20 14:32:55.445743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.397 spare 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.397 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.656 [2024-11-20 14:32:55.454275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.656 [2024-11-20 14:32:55.456896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.656 [2024-11-20 14:32:55.457108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.656 [2024-11-20 14:32:55.457241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:54.656 [2024-11-20 14:32:55.457611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:15:54.656 [2024-11-20 14:32:55.457695] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:54.656 [2024-11-20 14:32:55.458170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:54.656 [2024-11-20 14:32:55.458541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:54.656 [2024-11-20 14:32:55.458685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:54.656 [2024-11-20 14:32:55.459094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.656 "name": "raid_bdev1", 00:15:54.656 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:54.656 "strip_size_kb": 0, 00:15:54.656 "state": "online", 00:15:54.656 "raid_level": "raid1", 00:15:54.656 "superblock": true, 00:15:54.656 "num_base_bdevs": 4, 00:15:54.656 "num_base_bdevs_discovered": 4, 00:15:54.656 "num_base_bdevs_operational": 4, 00:15:54.656 "base_bdevs_list": [ 00:15:54.656 { 00:15:54.656 "name": "BaseBdev1", 00:15:54.656 "uuid": "7ded3de8-d792-51f1-8329-be7f6b50ea79", 00:15:54.656 "is_configured": true, 00:15:54.656 "data_offset": 2048, 00:15:54.656 "data_size": 63488 00:15:54.656 }, 00:15:54.656 { 00:15:54.656 "name": "BaseBdev2", 00:15:54.656 "uuid": "e58f4b28-71a7-5250-bde8-03dff8f502c1", 00:15:54.656 "is_configured": true, 00:15:54.656 "data_offset": 2048, 00:15:54.656 "data_size": 63488 00:15:54.656 }, 00:15:54.656 { 00:15:54.656 "name": "BaseBdev3", 00:15:54.656 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:54.656 "is_configured": true, 00:15:54.656 "data_offset": 2048, 00:15:54.656 "data_size": 63488 00:15:54.656 }, 00:15:54.656 { 00:15:54.656 "name": "BaseBdev4", 00:15:54.656 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:54.656 "is_configured": true, 00:15:54.656 "data_offset": 2048, 00:15:54.656 "data_size": 63488 00:15:54.656 } 00:15:54.656 ] 00:15:54.656 }' 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:54.656 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.222 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:55.222 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:55.222 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.222 14:32:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.222 [2024-11-20 14:32:55.999618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.222 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.222 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:55.222 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.222 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:55.223 14:32:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.223 [2024-11-20 14:32:56.103200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.223 "name": "raid_bdev1", 00:15:55.223 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:55.223 "strip_size_kb": 0, 00:15:55.223 "state": "online", 00:15:55.223 "raid_level": "raid1", 00:15:55.223 "superblock": true, 00:15:55.223 "num_base_bdevs": 4, 00:15:55.223 "num_base_bdevs_discovered": 3, 00:15:55.223 "num_base_bdevs_operational": 3, 00:15:55.223 "base_bdevs_list": [ 00:15:55.223 { 00:15:55.223 "name": null, 00:15:55.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.223 "is_configured": false, 00:15:55.223 "data_offset": 0, 00:15:55.223 "data_size": 63488 00:15:55.223 }, 00:15:55.223 { 00:15:55.223 "name": "BaseBdev2", 00:15:55.223 "uuid": "e58f4b28-71a7-5250-bde8-03dff8f502c1", 00:15:55.223 "is_configured": true, 00:15:55.223 "data_offset": 2048, 00:15:55.223 "data_size": 63488 00:15:55.223 }, 00:15:55.223 { 00:15:55.223 "name": "BaseBdev3", 00:15:55.223 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:55.223 "is_configured": true, 00:15:55.223 "data_offset": 2048, 00:15:55.223 "data_size": 63488 00:15:55.223 }, 00:15:55.223 { 00:15:55.223 "name": "BaseBdev4", 00:15:55.223 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:55.223 "is_configured": true, 00:15:55.223 "data_offset": 2048, 00:15:55.223 "data_size": 63488 00:15:55.223 } 00:15:55.223 ] 00:15:55.223 }' 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.223 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.223 [2024-11-20 14:32:56.211413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:55.223 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:55.223 Zero copy mechanism will not be used. 
00:15:55.223 Running I/O for 60 seconds... 00:15:55.791 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.791 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.791 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.791 [2024-11-20 14:32:56.638939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.791 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.791 14:32:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:55.791 [2024-11-20 14:32:56.742842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:55.791 [2024-11-20 14:32:56.745813] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.050 [2024-11-20 14:32:56.867495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:56.050 [2024-11-20 14:32:56.868189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:56.050 [2024-11-20 14:32:57.010526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:56.050 [2024-11-20 14:32:57.011416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:56.567 138.00 IOPS, 414.00 MiB/s [2024-11-20T14:32:57.624Z] [2024-11-20 14:32:57.379413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:56.826 [2024-11-20 14:32:57.634232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:56.826 
14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.826 "name": "raid_bdev1", 00:15:56.826 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:56.826 "strip_size_kb": 0, 00:15:56.826 "state": "online", 00:15:56.826 "raid_level": "raid1", 00:15:56.826 "superblock": true, 00:15:56.826 "num_base_bdevs": 4, 00:15:56.826 "num_base_bdevs_discovered": 4, 00:15:56.826 "num_base_bdevs_operational": 4, 00:15:56.826 "process": { 00:15:56.826 "type": "rebuild", 00:15:56.826 "target": "spare", 00:15:56.826 "progress": { 00:15:56.826 "blocks": 10240, 00:15:56.826 "percent": 16 00:15:56.826 } 00:15:56.826 }, 00:15:56.826 "base_bdevs_list": [ 00:15:56.826 { 00:15:56.826 "name": "spare", 00:15:56.826 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:15:56.826 "is_configured": true, 00:15:56.826 "data_offset": 
2048, 00:15:56.826 "data_size": 63488 00:15:56.826 }, 00:15:56.826 { 00:15:56.826 "name": "BaseBdev2", 00:15:56.826 "uuid": "e58f4b28-71a7-5250-bde8-03dff8f502c1", 00:15:56.826 "is_configured": true, 00:15:56.826 "data_offset": 2048, 00:15:56.826 "data_size": 63488 00:15:56.826 }, 00:15:56.826 { 00:15:56.826 "name": "BaseBdev3", 00:15:56.826 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:56.826 "is_configured": true, 00:15:56.826 "data_offset": 2048, 00:15:56.826 "data_size": 63488 00:15:56.826 }, 00:15:56.826 { 00:15:56.826 "name": "BaseBdev4", 00:15:56.826 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:56.826 "is_configured": true, 00:15:56.826 "data_offset": 2048, 00:15:56.826 "data_size": 63488 00:15:56.826 } 00:15:56.826 ] 00:15:56.826 }' 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.826 14:32:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.826 [2024-11-20 14:32:57.854698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.085 [2024-11-20 14:32:57.965609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:57.085 [2024-11-20 14:32:58.069167] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:57.085 [2024-11-20 14:32:58.091209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.085 [2024-11-20 14:32:58.091498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.085 [2024-11-20 14:32:58.091562] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.085 [2024-11-20 14:32:58.132228] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.344 14:32:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.344 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.344 "name": "raid_bdev1", 00:15:57.344 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:57.344 "strip_size_kb": 0, 00:15:57.344 "state": "online", 00:15:57.344 "raid_level": "raid1", 00:15:57.344 "superblock": true, 00:15:57.344 "num_base_bdevs": 4, 00:15:57.344 "num_base_bdevs_discovered": 3, 00:15:57.344 "num_base_bdevs_operational": 3, 00:15:57.344 "base_bdevs_list": [ 00:15:57.344 { 00:15:57.344 "name": null, 00:15:57.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.344 "is_configured": false, 00:15:57.344 "data_offset": 0, 00:15:57.344 "data_size": 63488 00:15:57.344 }, 00:15:57.344 { 00:15:57.344 "name": "BaseBdev2", 00:15:57.344 "uuid": "e58f4b28-71a7-5250-bde8-03dff8f502c1", 00:15:57.344 "is_configured": true, 00:15:57.344 "data_offset": 2048, 00:15:57.344 "data_size": 63488 00:15:57.344 }, 00:15:57.344 { 00:15:57.344 "name": "BaseBdev3", 00:15:57.344 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:57.344 "is_configured": true, 00:15:57.344 "data_offset": 2048, 00:15:57.344 "data_size": 63488 00:15:57.344 }, 00:15:57.344 { 00:15:57.344 "name": "BaseBdev4", 00:15:57.344 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:57.345 "is_configured": true, 00:15:57.345 "data_offset": 2048, 00:15:57.345 "data_size": 63488 00:15:57.345 } 00:15:57.345 ] 00:15:57.345 }' 00:15:57.345 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.345 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.911 100.00 IOPS, 300.00 MiB/s 
[2024-11-20T14:32:58.968Z] 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.911 "name": "raid_bdev1", 00:15:57.911 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:57.911 "strip_size_kb": 0, 00:15:57.911 "state": "online", 00:15:57.911 "raid_level": "raid1", 00:15:57.911 "superblock": true, 00:15:57.911 "num_base_bdevs": 4, 00:15:57.911 "num_base_bdevs_discovered": 3, 00:15:57.911 "num_base_bdevs_operational": 3, 00:15:57.911 "base_bdevs_list": [ 00:15:57.911 { 00:15:57.911 "name": null, 00:15:57.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.911 "is_configured": false, 00:15:57.911 "data_offset": 0, 00:15:57.911 "data_size": 63488 00:15:57.911 }, 00:15:57.911 { 00:15:57.911 "name": "BaseBdev2", 00:15:57.911 "uuid": "e58f4b28-71a7-5250-bde8-03dff8f502c1", 00:15:57.911 
"is_configured": true, 00:15:57.911 "data_offset": 2048, 00:15:57.911 "data_size": 63488 00:15:57.911 }, 00:15:57.911 { 00:15:57.911 "name": "BaseBdev3", 00:15:57.911 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:57.911 "is_configured": true, 00:15:57.911 "data_offset": 2048, 00:15:57.911 "data_size": 63488 00:15:57.911 }, 00:15:57.911 { 00:15:57.911 "name": "BaseBdev4", 00:15:57.911 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:57.911 "is_configured": true, 00:15:57.911 "data_offset": 2048, 00:15:57.911 "data_size": 63488 00:15:57.911 } 00:15:57.911 ] 00:15:57.911 }' 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.911 [2024-11-20 14:32:58.834736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.911 14:32:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:57.911 [2024-11-20 14:32:58.932597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:57.911 [2024-11-20 14:32:58.935357] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.170 
[2024-11-20 14:32:59.078329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:58.427 117.00 IOPS, 351.00 MiB/s [2024-11-20T14:32:59.484Z] [2024-11-20 14:32:59.314068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:58.427 [2024-11-20 14:32:59.315032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:58.993 [2024-11-20 14:32:59.806649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.993 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.993 "name": "raid_bdev1", 00:15:58.993 "uuid": 
"3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:58.993 "strip_size_kb": 0, 00:15:58.993 "state": "online", 00:15:58.993 "raid_level": "raid1", 00:15:58.993 "superblock": true, 00:15:58.993 "num_base_bdevs": 4, 00:15:58.993 "num_base_bdevs_discovered": 4, 00:15:58.993 "num_base_bdevs_operational": 4, 00:15:58.993 "process": { 00:15:58.993 "type": "rebuild", 00:15:58.993 "target": "spare", 00:15:58.993 "progress": { 00:15:58.993 "blocks": 10240, 00:15:58.993 "percent": 16 00:15:58.993 } 00:15:58.993 }, 00:15:58.993 "base_bdevs_list": [ 00:15:58.993 { 00:15:58.993 "name": "spare", 00:15:58.993 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:15:58.993 "is_configured": true, 00:15:58.993 "data_offset": 2048, 00:15:58.993 "data_size": 63488 00:15:58.993 }, 00:15:58.993 { 00:15:58.993 "name": "BaseBdev2", 00:15:58.993 "uuid": "e58f4b28-71a7-5250-bde8-03dff8f502c1", 00:15:58.993 "is_configured": true, 00:15:58.993 "data_offset": 2048, 00:15:58.993 "data_size": 63488 00:15:58.993 }, 00:15:58.993 { 00:15:58.993 "name": "BaseBdev3", 00:15:58.993 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:58.993 "is_configured": true, 00:15:58.993 "data_offset": 2048, 00:15:58.993 "data_size": 63488 00:15:58.994 }, 00:15:58.994 { 00:15:58.994 "name": "BaseBdev4", 00:15:58.994 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:58.994 "is_configured": true, 00:15:58.994 "data_offset": 2048, 00:15:58.994 "data_size": 63488 00:15:58.994 } 00:15:58.994 ] 00:15:58.994 }' 00:15:58.994 14:32:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.994 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.994 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.994 [2024-11-20 14:33:00.032023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 
00:15:58.994 [2024-11-20 14:33:00.033859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:59.252 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.252 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.252 [2024-11-20 14:33:00.061254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.252 104.25 IOPS, 312.75 MiB/s [2024-11-20T14:33:00.309Z] [2024-11-20 14:33:00.248954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:59.252 [2024-11-20 14:33:00.249399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:59.511 [2024-11-20 14:33:00.351248] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:59.511 [2024-11-20 14:33:00.351337] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 
raid_ch: 0x60d0000063c0 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.511 "name": "raid_bdev1", 00:15:59.511 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:59.511 "strip_size_kb": 0, 00:15:59.511 "state": "online", 00:15:59.511 "raid_level": "raid1", 00:15:59.511 "superblock": true, 00:15:59.511 "num_base_bdevs": 4, 00:15:59.511 "num_base_bdevs_discovered": 3, 00:15:59.511 "num_base_bdevs_operational": 3, 00:15:59.511 "process": { 00:15:59.511 
"type": "rebuild", 00:15:59.511 "target": "spare", 00:15:59.511 "progress": { 00:15:59.511 "blocks": 16384, 00:15:59.511 "percent": 25 00:15:59.511 } 00:15:59.511 }, 00:15:59.511 "base_bdevs_list": [ 00:15:59.511 { 00:15:59.511 "name": "spare", 00:15:59.511 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:15:59.511 "is_configured": true, 00:15:59.511 "data_offset": 2048, 00:15:59.511 "data_size": 63488 00:15:59.511 }, 00:15:59.511 { 00:15:59.511 "name": null, 00:15:59.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.511 "is_configured": false, 00:15:59.511 "data_offset": 0, 00:15:59.511 "data_size": 63488 00:15:59.511 }, 00:15:59.511 { 00:15:59.511 "name": "BaseBdev3", 00:15:59.511 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:59.511 "is_configured": true, 00:15:59.511 "data_offset": 2048, 00:15:59.511 "data_size": 63488 00:15:59.511 }, 00:15:59.511 { 00:15:59.511 "name": "BaseBdev4", 00:15:59.511 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:59.511 "is_configured": true, 00:15:59.511 "data_offset": 2048, 00:15:59.511 "data_size": 63488 00:15:59.511 } 00:15:59.511 ] 00:15:59.511 }' 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=542 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.511 14:33:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.511 "name": "raid_bdev1", 00:15:59.511 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:15:59.511 "strip_size_kb": 0, 00:15:59.511 "state": "online", 00:15:59.511 "raid_level": "raid1", 00:15:59.511 "superblock": true, 00:15:59.511 "num_base_bdevs": 4, 00:15:59.511 "num_base_bdevs_discovered": 3, 00:15:59.511 "num_base_bdevs_operational": 3, 00:15:59.511 "process": { 00:15:59.511 "type": "rebuild", 00:15:59.511 "target": "spare", 00:15:59.511 "progress": { 00:15:59.511 "blocks": 18432, 00:15:59.511 "percent": 29 00:15:59.511 } 00:15:59.511 }, 00:15:59.511 "base_bdevs_list": [ 00:15:59.511 { 00:15:59.511 "name": "spare", 00:15:59.511 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:15:59.511 "is_configured": true, 00:15:59.511 "data_offset": 2048, 00:15:59.511 "data_size": 63488 00:15:59.511 }, 00:15:59.511 { 00:15:59.511 "name": null, 00:15:59.511 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:59.511 "is_configured": false, 00:15:59.511 "data_offset": 0, 00:15:59.511 "data_size": 63488 00:15:59.511 }, 00:15:59.511 { 00:15:59.511 "name": "BaseBdev3", 00:15:59.511 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:15:59.511 "is_configured": true, 00:15:59.511 "data_offset": 2048, 00:15:59.511 "data_size": 63488 00:15:59.511 }, 00:15:59.511 { 00:15:59.511 "name": "BaseBdev4", 00:15:59.511 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:15:59.511 "is_configured": true, 00:15:59.511 "data_offset": 2048, 00:15:59.511 "data_size": 63488 00:15:59.511 } 00:15:59.511 ] 00:15:59.511 }' 00:15:59.511 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.770 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.770 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.770 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.770 14:33:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.096 [2024-11-20 14:33:01.040183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:00.613 98.40 IOPS, 295.20 MiB/s [2024-11-20T14:33:01.670Z] [2024-11-20 14:33:01.506901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:00.613 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.613 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.613 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.613 14:33:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.613 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.613 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.871 "name": "raid_bdev1", 00:16:00.871 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:00.871 "strip_size_kb": 0, 00:16:00.871 "state": "online", 00:16:00.871 "raid_level": "raid1", 00:16:00.871 "superblock": true, 00:16:00.871 "num_base_bdevs": 4, 00:16:00.871 "num_base_bdevs_discovered": 3, 00:16:00.871 "num_base_bdevs_operational": 3, 00:16:00.871 "process": { 00:16:00.871 "type": "rebuild", 00:16:00.871 "target": "spare", 00:16:00.871 "progress": { 00:16:00.871 "blocks": 34816, 00:16:00.871 "percent": 54 00:16:00.871 } 00:16:00.871 }, 00:16:00.871 "base_bdevs_list": [ 00:16:00.871 { 00:16:00.871 "name": "spare", 00:16:00.871 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:00.871 "is_configured": true, 00:16:00.871 "data_offset": 2048, 00:16:00.871 "data_size": 63488 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "name": null, 00:16:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.871 "is_configured": false, 00:16:00.871 "data_offset": 0, 00:16:00.871 "data_size": 63488 
00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "name": "BaseBdev3", 00:16:00.871 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:00.871 "is_configured": true, 00:16:00.871 "data_offset": 2048, 00:16:00.871 "data_size": 63488 00:16:00.871 }, 00:16:00.871 { 00:16:00.871 "name": "BaseBdev4", 00:16:00.871 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:00.871 "is_configured": true, 00:16:00.871 "data_offset": 2048, 00:16:00.871 "data_size": 63488 00:16:00.871 } 00:16:00.871 ] 00:16:00.871 }' 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.871 14:33:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.871 [2024-11-20 14:33:01.867543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:00.871 [2024-11-20 14:33:01.868752] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:01.436 92.17 IOPS, 276.50 MiB/s [2024-11-20T14:33:02.494Z] [2024-11-20 14:33:02.328417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:01.437 [2024-11-20 14:33:02.329659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:01.695 [2024-11-20 14:33:02.736007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.954 "name": "raid_bdev1", 00:16:01.954 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:01.954 "strip_size_kb": 0, 00:16:01.954 "state": "online", 00:16:01.954 "raid_level": "raid1", 00:16:01.954 "superblock": true, 00:16:01.954 "num_base_bdevs": 4, 00:16:01.954 "num_base_bdevs_discovered": 3, 00:16:01.954 "num_base_bdevs_operational": 3, 00:16:01.954 "process": { 00:16:01.954 "type": "rebuild", 00:16:01.954 "target": "spare", 00:16:01.954 "progress": { 00:16:01.954 "blocks": 51200, 00:16:01.954 "percent": 80 00:16:01.954 } 00:16:01.954 }, 00:16:01.954 "base_bdevs_list": [ 00:16:01.954 { 00:16:01.954 "name": "spare", 00:16:01.954 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 
00:16:01.954 "is_configured": true, 00:16:01.954 "data_offset": 2048, 00:16:01.954 "data_size": 63488 00:16:01.954 }, 00:16:01.954 { 00:16:01.954 "name": null, 00:16:01.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.954 "is_configured": false, 00:16:01.954 "data_offset": 0, 00:16:01.954 "data_size": 63488 00:16:01.954 }, 00:16:01.954 { 00:16:01.954 "name": "BaseBdev3", 00:16:01.954 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:01.954 "is_configured": true, 00:16:01.954 "data_offset": 2048, 00:16:01.954 "data_size": 63488 00:16:01.954 }, 00:16:01.954 { 00:16:01.954 "name": "BaseBdev4", 00:16:01.954 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:01.954 "is_configured": true, 00:16:01.954 "data_offset": 2048, 00:16:01.954 "data_size": 63488 00:16:01.954 } 00:16:01.954 ] 00:16:01.954 }' 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.954 14:33:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.471 83.29 IOPS, 249.86 MiB/s [2024-11-20T14:33:03.528Z] [2024-11-20 14:33:03.289469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:02.471 [2024-11-20 14:33:03.289885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:02.729 [2024-11-20 14:33:03.627124] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:02.729 [2024-11-20 14:33:03.742962] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: 
Finished rebuild on raid bdev raid_bdev1 00:16:02.729 [2024-11-20 14:33:03.746757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.987 14:33:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.987 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.246 "name": "raid_bdev1", 00:16:03.246 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:03.246 "strip_size_kb": 0, 00:16:03.246 "state": "online", 00:16:03.246 "raid_level": "raid1", 00:16:03.246 "superblock": true, 00:16:03.246 "num_base_bdevs": 4, 00:16:03.246 "num_base_bdevs_discovered": 3, 00:16:03.246 "num_base_bdevs_operational": 3, 00:16:03.246 "base_bdevs_list": [ 00:16:03.246 { 00:16:03.246 "name": "spare", 00:16:03.246 "uuid": 
"2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:03.246 "is_configured": true, 00:16:03.246 "data_offset": 2048, 00:16:03.246 "data_size": 63488 00:16:03.246 }, 00:16:03.246 { 00:16:03.246 "name": null, 00:16:03.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.246 "is_configured": false, 00:16:03.246 "data_offset": 0, 00:16:03.246 "data_size": 63488 00:16:03.246 }, 00:16:03.246 { 00:16:03.246 "name": "BaseBdev3", 00:16:03.246 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:03.246 "is_configured": true, 00:16:03.246 "data_offset": 2048, 00:16:03.246 "data_size": 63488 00:16:03.246 }, 00:16:03.246 { 00:16:03.246 "name": "BaseBdev4", 00:16:03.246 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:03.246 "is_configured": true, 00:16:03.246 "data_offset": 2048, 00:16:03.246 "data_size": 63488 00:16:03.246 } 00:16:03.246 ] 00:16:03.246 }' 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.246 "name": "raid_bdev1", 00:16:03.246 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:03.246 "strip_size_kb": 0, 00:16:03.246 "state": "online", 00:16:03.246 "raid_level": "raid1", 00:16:03.246 "superblock": true, 00:16:03.246 "num_base_bdevs": 4, 00:16:03.246 "num_base_bdevs_discovered": 3, 00:16:03.246 "num_base_bdevs_operational": 3, 00:16:03.246 "base_bdevs_list": [ 00:16:03.246 { 00:16:03.246 "name": "spare", 00:16:03.246 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:03.246 "is_configured": true, 00:16:03.246 "data_offset": 2048, 00:16:03.246 "data_size": 63488 00:16:03.246 }, 00:16:03.246 { 00:16:03.246 "name": null, 00:16:03.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.246 "is_configured": false, 00:16:03.246 "data_offset": 0, 00:16:03.246 "data_size": 63488 00:16:03.246 }, 00:16:03.246 { 00:16:03.246 "name": "BaseBdev3", 00:16:03.246 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:03.246 "is_configured": true, 00:16:03.246 "data_offset": 2048, 00:16:03.246 "data_size": 63488 00:16:03.246 }, 00:16:03.246 { 00:16:03.246 "name": "BaseBdev4", 00:16:03.246 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:03.246 "is_configured": true, 00:16:03.246 "data_offset": 2048, 00:16:03.246 "data_size": 63488 00:16:03.246 } 00:16:03.246 ] 00:16:03.246 }' 00:16:03.246 
76.75 IOPS, 230.25 MiB/s [2024-11-20T14:33:04.303Z] 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.246 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.505 
14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.505 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.505 "name": "raid_bdev1", 00:16:03.505 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:03.505 "strip_size_kb": 0, 00:16:03.505 "state": "online", 00:16:03.505 "raid_level": "raid1", 00:16:03.505 "superblock": true, 00:16:03.505 "num_base_bdevs": 4, 00:16:03.505 "num_base_bdevs_discovered": 3, 00:16:03.505 "num_base_bdevs_operational": 3, 00:16:03.505 "base_bdevs_list": [ 00:16:03.505 { 00:16:03.505 "name": "spare", 00:16:03.505 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:03.505 "is_configured": true, 00:16:03.505 "data_offset": 2048, 00:16:03.505 "data_size": 63488 00:16:03.505 }, 00:16:03.505 { 00:16:03.505 "name": null, 00:16:03.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.505 "is_configured": false, 00:16:03.505 "data_offset": 0, 00:16:03.505 "data_size": 63488 00:16:03.506 }, 00:16:03.506 { 00:16:03.506 "name": "BaseBdev3", 00:16:03.506 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:03.506 "is_configured": true, 00:16:03.506 "data_offset": 2048, 00:16:03.506 "data_size": 63488 00:16:03.506 }, 00:16:03.506 { 00:16:03.506 "name": "BaseBdev4", 00:16:03.506 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:03.506 "is_configured": true, 00:16:03.506 "data_offset": 2048, 00:16:03.506 "data_size": 63488 00:16:03.506 } 00:16:03.506 ] 00:16:03.506 }' 00:16:03.506 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.506 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.156 [2024-11-20 14:33:04.867214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.156 [2024-11-20 14:33:04.867288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.156 00:16:04.156 Latency(us) 00:16:04.156 [2024-11-20T14:33:05.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.156 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:04.156 raid_bdev1 : 8.75 72.66 217.99 0.00 0.00 17697.76 301.61 122969.37 00:16:04.156 [2024-11-20T14:33:05.213Z] =================================================================================================================== 00:16:04.156 [2024-11-20T14:33:05.213Z] Total : 72.66 217.99 0.00 0.00 17697.76 301.61 122969.37 00:16:04.156 [2024-11-20 14:33:04.987172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.156 [2024-11-20 14:33:04.987262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.156 [2024-11-20 14:33:04.987405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.156 [2024-11-20 14:33:04.987429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:04.156 { 00:16:04.156 "results": [ 00:16:04.156 { 00:16:04.156 "job": "raid_bdev1", 00:16:04.156 "core_mask": "0x1", 00:16:04.156 "workload": "randrw", 00:16:04.156 "percentage": 50, 00:16:04.156 "status": "finished", 00:16:04.156 "queue_depth": 2, 00:16:04.156 "io_size": 3145728, 00:16:04.156 "runtime": 8.752695, 00:16:04.156 "iops": 72.6633339788488, 00:16:04.156 "mibps": 217.99000193654638, 00:16:04.156 "io_failed": 0, 00:16:04.156 "io_timeout": 0, 00:16:04.156 
"avg_latency_us": 17697.755700400226, 00:16:04.156 "min_latency_us": 301.61454545454546, 00:16:04.156 "max_latency_us": 122969.36727272728 00:16:04.156 } 00:16:04.156 ], 00:16:04.156 "core_count": 1 00:16:04.156 } 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:04.156 14:33:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.156 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:04.414 /dev/nbd0 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.414 1+0 records in 00:16:04.414 1+0 records out 00:16:04.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365953 s, 11.2 MB/s 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 
00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.414 14:33:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.414 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:04.672 /dev/nbd1 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.931 1+0 records in 00:16:04.931 1+0 records out 00:16:04.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375243 s, 10.9 MB/s 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.931 14:33:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.189 
14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.189 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:05.756 /dev/nbd1 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 
-- # local nbd_name=nbd1 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.756 1+0 records in 00:16:05.756 1+0 records out 00:16:05.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444557 s, 9.2 MB/s 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- 
# cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.756 14:33:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.014 14:33:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.014 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.272 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.272 [2024-11-20 14:33:07.322329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.272 [2024-11-20 14:33:07.322405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.272 [2024-11-20 14:33:07.322438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:06.272 [2024-11-20 14:33:07.322456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.272 [2024-11-20 14:33:07.325449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.272 [2024-11-20 14:33:07.325502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:06.272 [2024-11-20 14:33:07.325622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.272 [2024-11-20 14:33:07.325728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.272 [2024-11-20 14:33:07.325909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.273 [2024-11-20 14:33:07.326102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.531 spare 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.531 [2024-11-20 
14:33:07.426281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:06.531 [2024-11-20 14:33:07.426370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:06.531 [2024-11-20 14:33:07.426889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:06.531 [2024-11-20 14:33:07.427178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:06.531 [2024-11-20 14:33:07.427206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:06.531 [2024-11-20 14:33:07.427485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.531 "name": "raid_bdev1", 00:16:06.531 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:06.531 "strip_size_kb": 0, 00:16:06.531 "state": "online", 00:16:06.531 "raid_level": "raid1", 00:16:06.531 "superblock": true, 00:16:06.531 "num_base_bdevs": 4, 00:16:06.531 "num_base_bdevs_discovered": 3, 00:16:06.531 "num_base_bdevs_operational": 3, 00:16:06.531 "base_bdevs_list": [ 00:16:06.531 { 00:16:06.531 "name": "spare", 00:16:06.531 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:06.531 "is_configured": true, 00:16:06.531 "data_offset": 2048, 00:16:06.531 "data_size": 63488 00:16:06.531 }, 00:16:06.531 { 00:16:06.531 "name": null, 00:16:06.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.531 "is_configured": false, 00:16:06.531 "data_offset": 2048, 00:16:06.531 "data_size": 63488 00:16:06.531 }, 00:16:06.531 { 00:16:06.531 "name": "BaseBdev3", 00:16:06.531 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:06.531 "is_configured": true, 00:16:06.531 "data_offset": 2048, 00:16:06.531 "data_size": 63488 00:16:06.531 }, 00:16:06.531 { 00:16:06.531 "name": "BaseBdev4", 00:16:06.531 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:06.531 "is_configured": true, 00:16:06.531 "data_offset": 2048, 00:16:06.531 "data_size": 63488 00:16:06.531 } 00:16:06.531 ] 00:16:06.531 }' 
00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.531 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.097 "name": "raid_bdev1", 00:16:07.097 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:07.097 "strip_size_kb": 0, 00:16:07.097 "state": "online", 00:16:07.097 "raid_level": "raid1", 00:16:07.097 "superblock": true, 00:16:07.097 "num_base_bdevs": 4, 00:16:07.097 "num_base_bdevs_discovered": 3, 00:16:07.097 "num_base_bdevs_operational": 3, 00:16:07.097 "base_bdevs_list": [ 00:16:07.097 { 00:16:07.097 "name": "spare", 00:16:07.097 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:07.097 "is_configured": true, 00:16:07.097 "data_offset": 
2048, 00:16:07.097 "data_size": 63488 00:16:07.097 }, 00:16:07.097 { 00:16:07.097 "name": null, 00:16:07.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.097 "is_configured": false, 00:16:07.097 "data_offset": 2048, 00:16:07.097 "data_size": 63488 00:16:07.097 }, 00:16:07.097 { 00:16:07.097 "name": "BaseBdev3", 00:16:07.097 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:07.097 "is_configured": true, 00:16:07.097 "data_offset": 2048, 00:16:07.097 "data_size": 63488 00:16:07.097 }, 00:16:07.097 { 00:16:07.097 "name": "BaseBdev4", 00:16:07.097 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:07.097 "is_configured": true, 00:16:07.097 "data_offset": 2048, 00:16:07.097 "data_size": 63488 00:16:07.097 } 00:16:07.097 ] 00:16:07.097 }' 00:16:07.097 14:33:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.097 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.355 [2024-11-20 14:33:08.179766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.355 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.355 "name": "raid_bdev1", 00:16:07.355 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:07.355 "strip_size_kb": 0, 00:16:07.355 "state": "online", 00:16:07.355 "raid_level": "raid1", 00:16:07.355 "superblock": true, 00:16:07.355 "num_base_bdevs": 4, 00:16:07.355 "num_base_bdevs_discovered": 2, 00:16:07.355 "num_base_bdevs_operational": 2, 00:16:07.355 "base_bdevs_list": [ 00:16:07.355 { 00:16:07.355 "name": null, 00:16:07.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.355 "is_configured": false, 00:16:07.355 "data_offset": 0, 00:16:07.355 "data_size": 63488 00:16:07.355 }, 00:16:07.355 { 00:16:07.355 "name": null, 00:16:07.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.355 "is_configured": false, 00:16:07.355 "data_offset": 2048, 00:16:07.355 "data_size": 63488 00:16:07.355 }, 00:16:07.355 { 00:16:07.355 "name": "BaseBdev3", 00:16:07.355 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:07.355 "is_configured": true, 00:16:07.355 "data_offset": 2048, 00:16:07.355 "data_size": 63488 00:16:07.355 }, 00:16:07.355 { 00:16:07.355 "name": "BaseBdev4", 00:16:07.355 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:07.355 "is_configured": true, 00:16:07.355 "data_offset": 2048, 00:16:07.355 "data_size": 63488 00:16:07.355 } 00:16:07.356 ] 00:16:07.356 }' 00:16:07.356 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.356 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.920 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.920 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:07.920 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.920 [2024-11-20 14:33:08.716040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.920 [2024-11-20 14:33:08.716307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:07.920 [2024-11-20 14:33:08.716336] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:07.920 [2024-11-20 14:33:08.716387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.920 [2024-11-20 14:33:08.730423] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:07.920 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.920 14:33:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:07.920 [2024-11-20 14:33:08.733165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.855 14:33:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.855 "name": "raid_bdev1", 00:16:08.855 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:08.855 "strip_size_kb": 0, 00:16:08.855 "state": "online", 00:16:08.855 "raid_level": "raid1", 00:16:08.855 "superblock": true, 00:16:08.855 "num_base_bdevs": 4, 00:16:08.855 "num_base_bdevs_discovered": 3, 00:16:08.855 "num_base_bdevs_operational": 3, 00:16:08.855 "process": { 00:16:08.855 "type": "rebuild", 00:16:08.855 "target": "spare", 00:16:08.855 "progress": { 00:16:08.855 "blocks": 20480, 00:16:08.855 "percent": 32 00:16:08.855 } 00:16:08.855 }, 00:16:08.855 "base_bdevs_list": [ 00:16:08.855 { 00:16:08.855 "name": "spare", 00:16:08.855 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:08.855 "is_configured": true, 00:16:08.855 "data_offset": 2048, 00:16:08.855 "data_size": 63488 00:16:08.855 }, 00:16:08.855 { 00:16:08.855 "name": null, 00:16:08.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.855 "is_configured": false, 00:16:08.855 "data_offset": 2048, 00:16:08.855 "data_size": 63488 00:16:08.855 }, 00:16:08.855 { 00:16:08.855 "name": "BaseBdev3", 00:16:08.855 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:08.855 "is_configured": true, 00:16:08.855 "data_offset": 2048, 00:16:08.855 "data_size": 63488 00:16:08.855 }, 00:16:08.855 { 00:16:08.855 "name": "BaseBdev4", 00:16:08.855 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:08.855 "is_configured": true, 00:16:08.855 "data_offset": 2048, 00:16:08.855 "data_size": 63488 00:16:08.855 } 00:16:08.855 ] 00:16:08.855 }' 00:16:08.855 14:33:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.855 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.114 [2024-11-20 14:33:09.911076] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.114 [2024-11-20 14:33:09.942967] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.114 [2024-11-20 14:33:09.943078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.114 [2024-11-20 14:33:09.943106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.114 [2024-11-20 14:33:09.943128] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.114 14:33:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.114 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.114 "name": "raid_bdev1", 00:16:09.114 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:09.114 "strip_size_kb": 0, 00:16:09.114 "state": "online", 00:16:09.114 "raid_level": "raid1", 00:16:09.114 "superblock": true, 00:16:09.114 "num_base_bdevs": 4, 00:16:09.114 "num_base_bdevs_discovered": 2, 00:16:09.114 "num_base_bdevs_operational": 2, 00:16:09.114 "base_bdevs_list": [ 00:16:09.114 { 00:16:09.114 "name": null, 00:16:09.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.114 "is_configured": false, 00:16:09.114 "data_offset": 0, 00:16:09.114 "data_size": 63488 00:16:09.114 }, 00:16:09.114 { 00:16:09.114 "name": null, 00:16:09.114 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:09.114 "is_configured": false, 00:16:09.114 "data_offset": 2048, 00:16:09.114 "data_size": 63488 00:16:09.114 }, 00:16:09.114 { 00:16:09.114 "name": "BaseBdev3", 00:16:09.114 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:09.114 "is_configured": true, 00:16:09.114 "data_offset": 2048, 00:16:09.114 "data_size": 63488 00:16:09.114 }, 00:16:09.114 { 00:16:09.114 "name": "BaseBdev4", 00:16:09.114 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:09.114 "is_configured": true, 00:16:09.114 "data_offset": 2048, 00:16:09.114 "data_size": 63488 00:16:09.114 } 00:16:09.114 ] 00:16:09.114 }' 00:16:09.114 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.114 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.680 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.680 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.680 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.680 [2024-11-20 14:33:10.482696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.680 [2024-11-20 14:33:10.482803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.680 [2024-11-20 14:33:10.482849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:09.680 [2024-11-20 14:33:10.482868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.680 [2024-11-20 14:33:10.483517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.680 [2024-11-20 14:33:10.483567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.680 [2024-11-20 14:33:10.483742] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.680 [2024-11-20 14:33:10.483776] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:09.680 [2024-11-20 14:33:10.483792] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:09.680 [2024-11-20 14:33:10.483834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.680 [2024-11-20 14:33:10.498251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:09.680 spare 00:16:09.680 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.680 14:33:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:09.680 [2024-11-20 14:33:10.500862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.658 14:33:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.658 "name": "raid_bdev1", 00:16:10.658 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:10.658 "strip_size_kb": 0, 00:16:10.658 "state": "online", 00:16:10.658 "raid_level": "raid1", 00:16:10.658 "superblock": true, 00:16:10.658 "num_base_bdevs": 4, 00:16:10.658 "num_base_bdevs_discovered": 3, 00:16:10.658 "num_base_bdevs_operational": 3, 00:16:10.658 "process": { 00:16:10.658 "type": "rebuild", 00:16:10.658 "target": "spare", 00:16:10.658 "progress": { 00:16:10.658 "blocks": 20480, 00:16:10.658 "percent": 32 00:16:10.658 } 00:16:10.658 }, 00:16:10.658 "base_bdevs_list": [ 00:16:10.658 { 00:16:10.658 "name": "spare", 00:16:10.658 "uuid": "2874a14c-45a8-5730-8c76-b3fb61f3e747", 00:16:10.658 "is_configured": true, 00:16:10.658 "data_offset": 2048, 00:16:10.658 "data_size": 63488 00:16:10.658 }, 00:16:10.658 { 00:16:10.658 "name": null, 00:16:10.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.658 "is_configured": false, 00:16:10.658 "data_offset": 2048, 00:16:10.658 "data_size": 63488 00:16:10.658 }, 00:16:10.658 { 00:16:10.658 "name": "BaseBdev3", 00:16:10.658 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:10.658 "is_configured": true, 00:16:10.658 "data_offset": 2048, 00:16:10.658 "data_size": 63488 00:16:10.658 }, 00:16:10.658 { 00:16:10.658 "name": "BaseBdev4", 00:16:10.658 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:10.658 "is_configured": true, 00:16:10.658 "data_offset": 2048, 00:16:10.658 "data_size": 63488 00:16:10.658 } 00:16:10.658 ] 00:16:10.658 }' 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.658 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.658 [2024-11-20 14:33:11.666479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.658 [2024-11-20 14:33:11.710388] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.658 [2024-11-20 14:33:11.710485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.658 [2024-11-20 14:33:11.710520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.658 [2024-11-20 14:33:11.710533] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.916 "name": "raid_bdev1", 00:16:10.916 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:10.916 "strip_size_kb": 0, 00:16:10.916 "state": "online", 00:16:10.916 "raid_level": "raid1", 00:16:10.916 "superblock": true, 00:16:10.916 "num_base_bdevs": 4, 00:16:10.916 "num_base_bdevs_discovered": 2, 00:16:10.916 "num_base_bdevs_operational": 2, 00:16:10.916 "base_bdevs_list": [ 00:16:10.916 { 00:16:10.916 "name": null, 00:16:10.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.916 "is_configured": false, 00:16:10.916 "data_offset": 0, 00:16:10.916 "data_size": 63488 00:16:10.916 }, 00:16:10.916 { 00:16:10.916 "name": null, 00:16:10.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.916 "is_configured": false, 00:16:10.916 "data_offset": 2048, 00:16:10.916 "data_size": 63488 00:16:10.916 }, 
00:16:10.916 { 00:16:10.916 "name": "BaseBdev3", 00:16:10.916 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:10.916 "is_configured": true, 00:16:10.916 "data_offset": 2048, 00:16:10.916 "data_size": 63488 00:16:10.916 }, 00:16:10.916 { 00:16:10.916 "name": "BaseBdev4", 00:16:10.916 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:10.916 "is_configured": true, 00:16:10.916 "data_offset": 2048, 00:16:10.916 "data_size": 63488 00:16:10.916 } 00:16:10.916 ] 00:16:10.916 }' 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.916 14:33:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.482 "name": "raid_bdev1", 00:16:11.482 "uuid": 
"3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:11.482 "strip_size_kb": 0, 00:16:11.482 "state": "online", 00:16:11.482 "raid_level": "raid1", 00:16:11.482 "superblock": true, 00:16:11.482 "num_base_bdevs": 4, 00:16:11.482 "num_base_bdevs_discovered": 2, 00:16:11.482 "num_base_bdevs_operational": 2, 00:16:11.482 "base_bdevs_list": [ 00:16:11.482 { 00:16:11.482 "name": null, 00:16:11.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.482 "is_configured": false, 00:16:11.482 "data_offset": 0, 00:16:11.482 "data_size": 63488 00:16:11.482 }, 00:16:11.482 { 00:16:11.482 "name": null, 00:16:11.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.482 "is_configured": false, 00:16:11.482 "data_offset": 2048, 00:16:11.482 "data_size": 63488 00:16:11.482 }, 00:16:11.482 { 00:16:11.482 "name": "BaseBdev3", 00:16:11.482 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:11.482 "is_configured": true, 00:16:11.482 "data_offset": 2048, 00:16:11.482 "data_size": 63488 00:16:11.482 }, 00:16:11.482 { 00:16:11.482 "name": "BaseBdev4", 00:16:11.482 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:11.482 "is_configured": true, 00:16:11.482 "data_offset": 2048, 00:16:11.482 "data_size": 63488 00:16:11.482 } 00:16:11.482 ] 00:16:11.482 }' 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.482 14:33:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.482 [2024-11-20 14:33:12.441692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.482 [2024-11-20 14:33:12.441775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.482 [2024-11-20 14:33:12.441812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:11.482 [2024-11-20 14:33:12.441827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.482 [2024-11-20 14:33:12.442465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.482 [2024-11-20 14:33:12.442510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.482 [2024-11-20 14:33:12.442653] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:11.482 [2024-11-20 14:33:12.442678] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:11.482 [2024-11-20 14:33:12.442699] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.482 [2024-11-20 14:33:12.442713] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:11.482 BaseBdev1 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:11.482 14:33:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.417 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.675 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.675 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.675 "name": "raid_bdev1", 00:16:12.675 "uuid": 
"3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:12.675 "strip_size_kb": 0, 00:16:12.675 "state": "online", 00:16:12.675 "raid_level": "raid1", 00:16:12.675 "superblock": true, 00:16:12.675 "num_base_bdevs": 4, 00:16:12.675 "num_base_bdevs_discovered": 2, 00:16:12.675 "num_base_bdevs_operational": 2, 00:16:12.675 "base_bdevs_list": [ 00:16:12.675 { 00:16:12.675 "name": null, 00:16:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.675 "is_configured": false, 00:16:12.675 "data_offset": 0, 00:16:12.675 "data_size": 63488 00:16:12.675 }, 00:16:12.675 { 00:16:12.675 "name": null, 00:16:12.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.675 "is_configured": false, 00:16:12.675 "data_offset": 2048, 00:16:12.675 "data_size": 63488 00:16:12.675 }, 00:16:12.675 { 00:16:12.675 "name": "BaseBdev3", 00:16:12.675 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:12.675 "is_configured": true, 00:16:12.675 "data_offset": 2048, 00:16:12.675 "data_size": 63488 00:16:12.675 }, 00:16:12.675 { 00:16:12.675 "name": "BaseBdev4", 00:16:12.675 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:12.675 "is_configured": true, 00:16:12.675 "data_offset": 2048, 00:16:12.675 "data_size": 63488 00:16:12.675 } 00:16:12.675 ] 00:16:12.676 }' 00:16:12.676 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.676 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.933 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.191 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.191 "name": "raid_bdev1", 00:16:13.191 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:13.191 "strip_size_kb": 0, 00:16:13.191 "state": "online", 00:16:13.191 "raid_level": "raid1", 00:16:13.191 "superblock": true, 00:16:13.191 "num_base_bdevs": 4, 00:16:13.191 "num_base_bdevs_discovered": 2, 00:16:13.191 "num_base_bdevs_operational": 2, 00:16:13.191 "base_bdevs_list": [ 00:16:13.191 { 00:16:13.191 "name": null, 00:16:13.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.191 "is_configured": false, 00:16:13.191 "data_offset": 0, 00:16:13.191 "data_size": 63488 00:16:13.191 }, 00:16:13.191 { 00:16:13.191 "name": null, 00:16:13.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.191 "is_configured": false, 00:16:13.191 "data_offset": 2048, 00:16:13.191 "data_size": 63488 00:16:13.191 }, 00:16:13.191 { 00:16:13.191 "name": "BaseBdev3", 00:16:13.192 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:13.192 "is_configured": true, 00:16:13.192 "data_offset": 2048, 00:16:13.192 "data_size": 63488 00:16:13.192 }, 00:16:13.192 { 00:16:13.192 "name": "BaseBdev4", 00:16:13.192 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:13.192 "is_configured": true, 00:16:13.192 "data_offset": 2048, 00:16:13.192 "data_size": 63488 00:16:13.192 
} 00:16:13.192 ] 00:16:13.192 }' 00:16:13.192 14:33:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.192 [2024-11-20 14:33:14.110571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.192 [2024-11-20 14:33:14.110841] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:16:13.192 [2024-11-20 14:33:14.110878] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.192 request: 00:16:13.192 { 00:16:13.192 "base_bdev": "BaseBdev1", 00:16:13.192 "raid_bdev": "raid_bdev1", 00:16:13.192 "method": "bdev_raid_add_base_bdev", 00:16:13.192 "req_id": 1 00:16:13.192 } 00:16:13.192 Got JSON-RPC error response 00:16:13.192 response: 00:16:13.192 { 00:16:13.192 "code": -22, 00:16:13.192 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:13.192 } 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.192 14:33:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.125 "name": "raid_bdev1", 00:16:14.125 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:14.125 "strip_size_kb": 0, 00:16:14.125 "state": "online", 00:16:14.125 "raid_level": "raid1", 00:16:14.125 "superblock": true, 00:16:14.125 "num_base_bdevs": 4, 00:16:14.125 "num_base_bdevs_discovered": 2, 00:16:14.125 "num_base_bdevs_operational": 2, 00:16:14.125 "base_bdevs_list": [ 00:16:14.125 { 00:16:14.125 "name": null, 00:16:14.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.125 "is_configured": false, 00:16:14.125 "data_offset": 0, 00:16:14.125 "data_size": 63488 00:16:14.125 }, 00:16:14.125 { 00:16:14.125 "name": null, 00:16:14.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.125 "is_configured": false, 00:16:14.125 "data_offset": 2048, 00:16:14.125 "data_size": 63488 00:16:14.125 }, 00:16:14.125 { 00:16:14.125 "name": "BaseBdev3", 00:16:14.125 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:14.125 "is_configured": true, 00:16:14.125 
"data_offset": 2048, 00:16:14.125 "data_size": 63488 00:16:14.125 }, 00:16:14.125 { 00:16:14.125 "name": "BaseBdev4", 00:16:14.125 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:14.125 "is_configured": true, 00:16:14.125 "data_offset": 2048, 00:16:14.125 "data_size": 63488 00:16:14.125 } 00:16:14.125 ] 00:16:14.125 }' 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.125 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.691 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.691 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.691 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.692 "name": "raid_bdev1", 00:16:14.692 "uuid": "3787e4ce-ab4d-4bc6-8694-7b9079be89d8", 00:16:14.692 "strip_size_kb": 0, 00:16:14.692 "state": "online", 00:16:14.692 "raid_level": "raid1", 00:16:14.692 "superblock": true, 
00:16:14.692 "num_base_bdevs": 4, 00:16:14.692 "num_base_bdevs_discovered": 2, 00:16:14.692 "num_base_bdevs_operational": 2, 00:16:14.692 "base_bdevs_list": [ 00:16:14.692 { 00:16:14.692 "name": null, 00:16:14.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.692 "is_configured": false, 00:16:14.692 "data_offset": 0, 00:16:14.692 "data_size": 63488 00:16:14.692 }, 00:16:14.692 { 00:16:14.692 "name": null, 00:16:14.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.692 "is_configured": false, 00:16:14.692 "data_offset": 2048, 00:16:14.692 "data_size": 63488 00:16:14.692 }, 00:16:14.692 { 00:16:14.692 "name": "BaseBdev3", 00:16:14.692 "uuid": "d8313530-f30f-5447-9cea-aa15b834d5dc", 00:16:14.692 "is_configured": true, 00:16:14.692 "data_offset": 2048, 00:16:14.692 "data_size": 63488 00:16:14.692 }, 00:16:14.692 { 00:16:14.692 "name": "BaseBdev4", 00:16:14.692 "uuid": "a25b11f5-b519-52a8-8831-188bbf2fca37", 00:16:14.692 "is_configured": true, 00:16:14.692 "data_offset": 2048, 00:16:14.692 "data_size": 63488 00:16:14.692 } 00:16:14.692 ] 00:16:14.692 }' 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.692 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.950 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.950 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79577 00:16:14.950 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79577 ']' 00:16:14.950 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79577 00:16:14.950 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:14.950 14:33:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.950 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79577 00:16:14.950 killing process with pid 79577 00:16:14.950 Received shutdown signal, test time was about 19.604916 seconds 00:16:14.950 00:16:14.950 Latency(us) 00:16:14.950 [2024-11-20T14:33:16.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.950 [2024-11-20T14:33:16.007Z] =================================================================================================================== 00:16:14.950 [2024-11-20T14:33:16.008Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.951 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.951 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.951 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79577' 00:16:14.951 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79577 00:16:14.951 [2024-11-20 14:33:15.819221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.951 14:33:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79577 00:16:14.951 [2024-11-20 14:33:15.819421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.951 [2024-11-20 14:33:15.819518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.951 [2024-11-20 14:33:15.819539] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:15.208 [2024-11-20 14:33:16.199386] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.593 ************************************ 00:16:16.593 
END TEST raid_rebuild_test_sb_io 00:16:16.593 ************************************ 00:16:16.593 14:33:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:16.593 00:16:16.593 real 0m23.225s 00:16:16.593 user 0m31.462s 00:16:16.593 sys 0m2.452s 00:16:16.593 14:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.593 14:33:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.593 14:33:17 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:16.593 14:33:17 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:16.593 14:33:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:16.593 14:33:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.593 14:33:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.593 ************************************ 00:16:16.593 START TEST raid5f_state_function_test 00:16:16.593 ************************************ 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 
00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true 
']' 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80316 00:16:16.593 Process raid pid: 80316 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80316' 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80316 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80316 ']' 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.593 14:33:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.593 [2024-11-20 14:33:17.493803] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:16:16.593 [2024-11-20 14:33:17.494002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.852 [2024-11-20 14:33:17.687361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.852 [2024-11-20 14:33:17.852081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.110 [2024-11-20 14:33:18.089870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.110 [2024-11-20 14:33:18.089951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.676 [2024-11-20 14:33:18.493861] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.676 [2024-11-20 14:33:18.493933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.676 [2024-11-20 14:33:18.493951] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.676 [2024-11-20 14:33:18.493969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.676 [2024-11-20 14:33:18.493979] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:17.676 [2024-11-20 14:33:18.493993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.676 "name": "Existed_Raid", 00:16:17.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.676 "strip_size_kb": 64, 00:16:17.676 "state": "configuring", 00:16:17.676 "raid_level": "raid5f", 00:16:17.676 "superblock": false, 00:16:17.676 "num_base_bdevs": 3, 00:16:17.676 "num_base_bdevs_discovered": 0, 00:16:17.676 "num_base_bdevs_operational": 3, 00:16:17.676 "base_bdevs_list": [ 00:16:17.676 { 00:16:17.676 "name": "BaseBdev1", 00:16:17.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.676 "is_configured": false, 00:16:17.676 "data_offset": 0, 00:16:17.676 "data_size": 0 00:16:17.676 }, 00:16:17.676 { 00:16:17.676 "name": "BaseBdev2", 00:16:17.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.676 "is_configured": false, 00:16:17.676 "data_offset": 0, 00:16:17.676 "data_size": 0 00:16:17.676 }, 00:16:17.676 { 00:16:17.676 "name": "BaseBdev3", 00:16:17.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.676 "is_configured": false, 00:16:17.676 "data_offset": 0, 00:16:17.676 "data_size": 0 00:16:17.676 } 00:16:17.676 ] 00:16:17.676 }' 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.676 14:33:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.242 [2024-11-20 14:33:19.005952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.242 [2024-11-20 14:33:19.006005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.242 [2024-11-20 14:33:19.013923] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.242 [2024-11-20 14:33:19.013980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.242 [2024-11-20 14:33:19.013995] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.242 [2024-11-20 14:33:19.014011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.242 [2024-11-20 14:33:19.014021] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.242 [2024-11-20 14:33:19.014035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.242 [2024-11-20 14:33:19.059466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.242 BaseBdev1 00:16:18.242 14:33:19 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.242 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.243 [ 00:16:18.243 { 00:16:18.243 "name": "BaseBdev1", 00:16:18.243 "aliases": [ 00:16:18.243 "03f61570-9fd0-4af2-9b72-4267e896b175" 00:16:18.243 ], 00:16:18.243 "product_name": "Malloc disk", 00:16:18.243 "block_size": 512, 00:16:18.243 "num_blocks": 65536, 00:16:18.243 "uuid": "03f61570-9fd0-4af2-9b72-4267e896b175", 00:16:18.243 "assigned_rate_limits": { 00:16:18.243 "rw_ios_per_sec": 0, 00:16:18.243 
"rw_mbytes_per_sec": 0, 00:16:18.243 "r_mbytes_per_sec": 0, 00:16:18.243 "w_mbytes_per_sec": 0 00:16:18.243 }, 00:16:18.243 "claimed": true, 00:16:18.243 "claim_type": "exclusive_write", 00:16:18.243 "zoned": false, 00:16:18.243 "supported_io_types": { 00:16:18.243 "read": true, 00:16:18.243 "write": true, 00:16:18.243 "unmap": true, 00:16:18.243 "flush": true, 00:16:18.243 "reset": true, 00:16:18.243 "nvme_admin": false, 00:16:18.243 "nvme_io": false, 00:16:18.243 "nvme_io_md": false, 00:16:18.243 "write_zeroes": true, 00:16:18.243 "zcopy": true, 00:16:18.243 "get_zone_info": false, 00:16:18.243 "zone_management": false, 00:16:18.243 "zone_append": false, 00:16:18.243 "compare": false, 00:16:18.243 "compare_and_write": false, 00:16:18.243 "abort": true, 00:16:18.243 "seek_hole": false, 00:16:18.243 "seek_data": false, 00:16:18.243 "copy": true, 00:16:18.243 "nvme_iov_md": false 00:16:18.243 }, 00:16:18.243 "memory_domains": [ 00:16:18.243 { 00:16:18.243 "dma_device_id": "system", 00:16:18.243 "dma_device_type": 1 00:16:18.243 }, 00:16:18.243 { 00:16:18.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.243 "dma_device_type": 2 00:16:18.243 } 00:16:18.243 ], 00:16:18.243 "driver_specific": {} 00:16:18.243 } 00:16:18.243 ] 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.243 14:33:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.243 "name": "Existed_Raid", 00:16:18.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.243 "strip_size_kb": 64, 00:16:18.243 "state": "configuring", 00:16:18.243 "raid_level": "raid5f", 00:16:18.243 "superblock": false, 00:16:18.243 "num_base_bdevs": 3, 00:16:18.243 "num_base_bdevs_discovered": 1, 00:16:18.243 "num_base_bdevs_operational": 3, 00:16:18.243 "base_bdevs_list": [ 00:16:18.243 { 00:16:18.243 "name": "BaseBdev1", 00:16:18.243 "uuid": "03f61570-9fd0-4af2-9b72-4267e896b175", 00:16:18.243 "is_configured": true, 00:16:18.243 "data_offset": 0, 00:16:18.243 "data_size": 65536 00:16:18.243 }, 00:16:18.243 { 00:16:18.243 "name": 
"BaseBdev2", 00:16:18.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.243 "is_configured": false, 00:16:18.243 "data_offset": 0, 00:16:18.243 "data_size": 0 00:16:18.243 }, 00:16:18.243 { 00:16:18.243 "name": "BaseBdev3", 00:16:18.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.243 "is_configured": false, 00:16:18.243 "data_offset": 0, 00:16:18.243 "data_size": 0 00:16:18.243 } 00:16:18.243 ] 00:16:18.243 }' 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.243 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 [2024-11-20 14:33:19.595702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:18.811 [2024-11-20 14:33:19.595783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 [2024-11-20 14:33:19.607745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.811 [2024-11-20 14:33:19.610215] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:18.811 [2024-11-20 14:33:19.610270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.811 [2024-11-20 14:33:19.610287] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.811 [2024-11-20 14:33:19.610302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.811 "name": "Existed_Raid", 00:16:18.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.811 "strip_size_kb": 64, 00:16:18.811 "state": "configuring", 00:16:18.811 "raid_level": "raid5f", 00:16:18.811 "superblock": false, 00:16:18.811 "num_base_bdevs": 3, 00:16:18.811 "num_base_bdevs_discovered": 1, 00:16:18.811 "num_base_bdevs_operational": 3, 00:16:18.811 "base_bdevs_list": [ 00:16:18.811 { 00:16:18.811 "name": "BaseBdev1", 00:16:18.811 "uuid": "03f61570-9fd0-4af2-9b72-4267e896b175", 00:16:18.811 "is_configured": true, 00:16:18.811 "data_offset": 0, 00:16:18.811 "data_size": 65536 00:16:18.811 }, 00:16:18.811 { 00:16:18.811 "name": "BaseBdev2", 00:16:18.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.811 "is_configured": false, 00:16:18.811 "data_offset": 0, 00:16:18.811 "data_size": 0 00:16:18.811 }, 00:16:18.811 { 00:16:18.811 "name": "BaseBdev3", 00:16:18.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.811 "is_configured": false, 00:16:18.811 "data_offset": 0, 00:16:18.811 "data_size": 0 00:16:18.811 } 00:16:18.811 ] 00:16:18.811 }' 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.811 14:33:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.070 14:33:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.070 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.070 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.328 [2024-11-20 14:33:20.151942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.328 BaseBdev2 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.328 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.329 [ 00:16:19.329 { 00:16:19.329 "name": "BaseBdev2", 00:16:19.329 "aliases": [ 00:16:19.329 "0756b169-f883-4cf1-8438-1df7a10c2475" 00:16:19.329 ], 00:16:19.329 "product_name": "Malloc disk", 00:16:19.329 "block_size": 512, 00:16:19.329 "num_blocks": 65536, 00:16:19.329 "uuid": "0756b169-f883-4cf1-8438-1df7a10c2475", 00:16:19.329 "assigned_rate_limits": { 00:16:19.329 "rw_ios_per_sec": 0, 00:16:19.329 "rw_mbytes_per_sec": 0, 00:16:19.329 "r_mbytes_per_sec": 0, 00:16:19.329 "w_mbytes_per_sec": 0 00:16:19.329 }, 00:16:19.329 "claimed": true, 00:16:19.329 "claim_type": "exclusive_write", 00:16:19.329 "zoned": false, 00:16:19.329 "supported_io_types": { 00:16:19.329 "read": true, 00:16:19.329 "write": true, 00:16:19.329 "unmap": true, 00:16:19.329 "flush": true, 00:16:19.329 "reset": true, 00:16:19.329 "nvme_admin": false, 00:16:19.329 "nvme_io": false, 00:16:19.329 "nvme_io_md": false, 00:16:19.329 "write_zeroes": true, 00:16:19.329 "zcopy": true, 00:16:19.329 "get_zone_info": false, 00:16:19.329 "zone_management": false, 00:16:19.329 "zone_append": false, 00:16:19.329 "compare": false, 00:16:19.329 "compare_and_write": false, 00:16:19.329 "abort": true, 00:16:19.329 "seek_hole": false, 00:16:19.329 "seek_data": false, 00:16:19.329 "copy": true, 00:16:19.329 "nvme_iov_md": false 00:16:19.329 }, 00:16:19.329 "memory_domains": [ 00:16:19.329 { 00:16:19.329 "dma_device_id": "system", 00:16:19.329 "dma_device_type": 1 00:16:19.329 }, 00:16:19.329 { 00:16:19.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.329 "dma_device_type": 2 00:16:19.329 } 00:16:19.329 ], 00:16:19.329 "driver_specific": {} 00:16:19.329 } 00:16:19.329 ] 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:19.329 "name": "Existed_Raid", 00:16:19.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.329 "strip_size_kb": 64, 00:16:19.329 "state": "configuring", 00:16:19.329 "raid_level": "raid5f", 00:16:19.329 "superblock": false, 00:16:19.329 "num_base_bdevs": 3, 00:16:19.329 "num_base_bdevs_discovered": 2, 00:16:19.329 "num_base_bdevs_operational": 3, 00:16:19.329 "base_bdevs_list": [ 00:16:19.329 { 00:16:19.329 "name": "BaseBdev1", 00:16:19.329 "uuid": "03f61570-9fd0-4af2-9b72-4267e896b175", 00:16:19.329 "is_configured": true, 00:16:19.329 "data_offset": 0, 00:16:19.329 "data_size": 65536 00:16:19.329 }, 00:16:19.329 { 00:16:19.329 "name": "BaseBdev2", 00:16:19.329 "uuid": "0756b169-f883-4cf1-8438-1df7a10c2475", 00:16:19.329 "is_configured": true, 00:16:19.329 "data_offset": 0, 00:16:19.329 "data_size": 65536 00:16:19.329 }, 00:16:19.329 { 00:16:19.329 "name": "BaseBdev3", 00:16:19.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.329 "is_configured": false, 00:16:19.329 "data_offset": 0, 00:16:19.329 "data_size": 0 00:16:19.329 } 00:16:19.329 ] 00:16:19.329 }' 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.329 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.896 [2024-11-20 14:33:20.742102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.896 [2024-11-20 14:33:20.742259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:19.896 [2024-11-20 14:33:20.742288] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:19.896 [2024-11-20 14:33:20.742649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:19.896 [2024-11-20 14:33:20.748125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:19.896 [2024-11-20 14:33:20.748154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:19.896 [2024-11-20 14:33:20.748529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.896 BaseBdev3 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.896 [ 00:16:19.896 { 00:16:19.896 "name": "BaseBdev3", 00:16:19.896 "aliases": [ 00:16:19.896 "a27c679e-0c85-47ea-a807-a812e11de02d" 00:16:19.896 ], 00:16:19.896 "product_name": "Malloc disk", 00:16:19.896 "block_size": 512, 00:16:19.896 "num_blocks": 65536, 00:16:19.896 "uuid": "a27c679e-0c85-47ea-a807-a812e11de02d", 00:16:19.896 "assigned_rate_limits": { 00:16:19.896 "rw_ios_per_sec": 0, 00:16:19.896 "rw_mbytes_per_sec": 0, 00:16:19.896 "r_mbytes_per_sec": 0, 00:16:19.896 "w_mbytes_per_sec": 0 00:16:19.896 }, 00:16:19.896 "claimed": true, 00:16:19.896 "claim_type": "exclusive_write", 00:16:19.896 "zoned": false, 00:16:19.896 "supported_io_types": { 00:16:19.896 "read": true, 00:16:19.896 "write": true, 00:16:19.896 "unmap": true, 00:16:19.896 "flush": true, 00:16:19.896 "reset": true, 00:16:19.896 "nvme_admin": false, 00:16:19.896 "nvme_io": false, 00:16:19.896 "nvme_io_md": false, 00:16:19.896 "write_zeroes": true, 00:16:19.896 "zcopy": true, 00:16:19.896 "get_zone_info": false, 00:16:19.896 "zone_management": false, 00:16:19.896 "zone_append": false, 00:16:19.896 "compare": false, 00:16:19.896 "compare_and_write": false, 00:16:19.896 "abort": true, 00:16:19.896 "seek_hole": false, 00:16:19.896 "seek_data": false, 00:16:19.896 "copy": true, 00:16:19.896 "nvme_iov_md": false 00:16:19.896 }, 00:16:19.896 "memory_domains": [ 00:16:19.896 { 00:16:19.896 "dma_device_id": "system", 00:16:19.896 "dma_device_type": 1 00:16:19.896 }, 00:16:19.896 { 00:16:19.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.896 "dma_device_type": 2 00:16:19.896 } 00:16:19.896 ], 00:16:19.896 "driver_specific": {} 00:16:19.896 } 00:16:19.896 ] 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.896 14:33:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.896 "name": "Existed_Raid", 00:16:19.896 "uuid": "4c1ebd97-c84a-4645-9d1a-561cb61a1a76", 00:16:19.896 "strip_size_kb": 64, 00:16:19.896 "state": "online", 00:16:19.896 "raid_level": "raid5f", 00:16:19.896 "superblock": false, 00:16:19.896 "num_base_bdevs": 3, 00:16:19.896 "num_base_bdevs_discovered": 3, 00:16:19.896 "num_base_bdevs_operational": 3, 00:16:19.896 "base_bdevs_list": [ 00:16:19.896 { 00:16:19.896 "name": "BaseBdev1", 00:16:19.896 "uuid": "03f61570-9fd0-4af2-9b72-4267e896b175", 00:16:19.896 "is_configured": true, 00:16:19.896 "data_offset": 0, 00:16:19.896 "data_size": 65536 00:16:19.896 }, 00:16:19.896 { 00:16:19.896 "name": "BaseBdev2", 00:16:19.896 "uuid": "0756b169-f883-4cf1-8438-1df7a10c2475", 00:16:19.896 "is_configured": true, 00:16:19.896 "data_offset": 0, 00:16:19.896 "data_size": 65536 00:16:19.896 }, 00:16:19.896 { 00:16:19.896 "name": "BaseBdev3", 00:16:19.896 "uuid": "a27c679e-0c85-47ea-a807-a812e11de02d", 00:16:19.896 "is_configured": true, 00:16:19.896 "data_offset": 0, 00:16:19.896 "data_size": 65536 00:16:19.896 } 00:16:19.896 ] 00:16:19.896 }' 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.896 14:33:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:20.462 14:33:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.462 [2024-11-20 14:33:21.306556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.462 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:20.462 "name": "Existed_Raid", 00:16:20.462 "aliases": [ 00:16:20.462 "4c1ebd97-c84a-4645-9d1a-561cb61a1a76" 00:16:20.462 ], 00:16:20.462 "product_name": "Raid Volume", 00:16:20.462 "block_size": 512, 00:16:20.462 "num_blocks": 131072, 00:16:20.462 "uuid": "4c1ebd97-c84a-4645-9d1a-561cb61a1a76", 00:16:20.462 "assigned_rate_limits": { 00:16:20.462 "rw_ios_per_sec": 0, 00:16:20.462 "rw_mbytes_per_sec": 0, 00:16:20.462 "r_mbytes_per_sec": 0, 00:16:20.462 "w_mbytes_per_sec": 0 00:16:20.462 }, 00:16:20.462 "claimed": false, 00:16:20.462 "zoned": false, 00:16:20.462 "supported_io_types": { 00:16:20.462 "read": true, 00:16:20.462 "write": true, 00:16:20.462 "unmap": false, 00:16:20.462 "flush": false, 00:16:20.462 "reset": true, 00:16:20.462 "nvme_admin": false, 00:16:20.462 "nvme_io": false, 00:16:20.462 "nvme_io_md": false, 00:16:20.462 "write_zeroes": true, 00:16:20.462 "zcopy": false, 00:16:20.462 "get_zone_info": false, 00:16:20.462 "zone_management": false, 00:16:20.462 "zone_append": false, 
00:16:20.462 "compare": false, 00:16:20.462 "compare_and_write": false, 00:16:20.462 "abort": false, 00:16:20.462 "seek_hole": false, 00:16:20.462 "seek_data": false, 00:16:20.462 "copy": false, 00:16:20.462 "nvme_iov_md": false 00:16:20.462 }, 00:16:20.462 "driver_specific": { 00:16:20.462 "raid": { 00:16:20.462 "uuid": "4c1ebd97-c84a-4645-9d1a-561cb61a1a76", 00:16:20.462 "strip_size_kb": 64, 00:16:20.462 "state": "online", 00:16:20.462 "raid_level": "raid5f", 00:16:20.462 "superblock": false, 00:16:20.462 "num_base_bdevs": 3, 00:16:20.462 "num_base_bdevs_discovered": 3, 00:16:20.462 "num_base_bdevs_operational": 3, 00:16:20.462 "base_bdevs_list": [ 00:16:20.462 { 00:16:20.462 "name": "BaseBdev1", 00:16:20.462 "uuid": "03f61570-9fd0-4af2-9b72-4267e896b175", 00:16:20.462 "is_configured": true, 00:16:20.462 "data_offset": 0, 00:16:20.462 "data_size": 65536 00:16:20.462 }, 00:16:20.462 { 00:16:20.462 "name": "BaseBdev2", 00:16:20.462 "uuid": "0756b169-f883-4cf1-8438-1df7a10c2475", 00:16:20.462 "is_configured": true, 00:16:20.462 "data_offset": 0, 00:16:20.462 "data_size": 65536 00:16:20.462 }, 00:16:20.462 { 00:16:20.462 "name": "BaseBdev3", 00:16:20.462 "uuid": "a27c679e-0c85-47ea-a807-a812e11de02d", 00:16:20.462 "is_configured": true, 00:16:20.462 "data_offset": 0, 00:16:20.462 "data_size": 65536 00:16:20.462 } 00:16:20.462 ] 00:16:20.462 } 00:16:20.462 } 00:16:20.463 }' 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:20.463 BaseBdev2 00:16:20.463 BaseBdev3' 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.463 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 [2024-11-20 14:33:21.614407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:20.721 
14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.721 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.721 "name": "Existed_Raid", 00:16:20.721 "uuid": "4c1ebd97-c84a-4645-9d1a-561cb61a1a76", 00:16:20.721 "strip_size_kb": 64, 00:16:20.721 "state": 
"online", 00:16:20.721 "raid_level": "raid5f", 00:16:20.721 "superblock": false, 00:16:20.721 "num_base_bdevs": 3, 00:16:20.721 "num_base_bdevs_discovered": 2, 00:16:20.721 "num_base_bdevs_operational": 2, 00:16:20.721 "base_bdevs_list": [ 00:16:20.721 { 00:16:20.721 "name": null, 00:16:20.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.721 "is_configured": false, 00:16:20.721 "data_offset": 0, 00:16:20.721 "data_size": 65536 00:16:20.721 }, 00:16:20.721 { 00:16:20.721 "name": "BaseBdev2", 00:16:20.721 "uuid": "0756b169-f883-4cf1-8438-1df7a10c2475", 00:16:20.721 "is_configured": true, 00:16:20.721 "data_offset": 0, 00:16:20.721 "data_size": 65536 00:16:20.721 }, 00:16:20.721 { 00:16:20.721 "name": "BaseBdev3", 00:16:20.721 "uuid": "a27c679e-0c85-47ea-a807-a812e11de02d", 00:16:20.721 "is_configured": true, 00:16:20.721 "data_offset": 0, 00:16:20.721 "data_size": 65536 00:16:20.721 } 00:16:20.721 ] 00:16:20.721 }' 00:16:20.979 14:33:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.979 14:33:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.237 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.524 [2024-11-20 14:33:22.295206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.524 [2024-11-20 14:33:22.295336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.524 [2024-11-20 14:33:22.382709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.524 [2024-11-20 14:33:22.438788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:21.524 [2024-11-20 14:33:22.438981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:21.524 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.785 BaseBdev2 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:21.785 [ 00:16:21.785 { 00:16:21.785 "name": "BaseBdev2", 00:16:21.785 "aliases": [ 00:16:21.785 "838771ab-5341-4a7a-9e3e-c87d475d96e0" 00:16:21.785 ], 00:16:21.785 "product_name": "Malloc disk", 00:16:21.785 "block_size": 512, 00:16:21.785 "num_blocks": 65536, 00:16:21.785 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:21.785 "assigned_rate_limits": { 00:16:21.785 "rw_ios_per_sec": 0, 00:16:21.785 "rw_mbytes_per_sec": 0, 00:16:21.785 "r_mbytes_per_sec": 0, 00:16:21.785 "w_mbytes_per_sec": 0 00:16:21.785 }, 00:16:21.785 "claimed": false, 00:16:21.785 "zoned": false, 00:16:21.785 "supported_io_types": { 00:16:21.785 "read": true, 00:16:21.785 "write": true, 00:16:21.785 "unmap": true, 00:16:21.785 "flush": true, 00:16:21.785 "reset": true, 00:16:21.785 "nvme_admin": false, 00:16:21.785 "nvme_io": false, 00:16:21.785 "nvme_io_md": false, 00:16:21.785 "write_zeroes": true, 00:16:21.785 "zcopy": true, 00:16:21.785 "get_zone_info": false, 00:16:21.785 "zone_management": false, 00:16:21.785 "zone_append": false, 00:16:21.785 "compare": false, 00:16:21.785 "compare_and_write": false, 00:16:21.785 "abort": true, 00:16:21.785 "seek_hole": false, 00:16:21.785 "seek_data": false, 00:16:21.785 "copy": true, 00:16:21.785 "nvme_iov_md": false 00:16:21.785 }, 00:16:21.785 "memory_domains": [ 00:16:21.785 { 00:16:21.785 "dma_device_id": "system", 00:16:21.785 "dma_device_type": 1 00:16:21.785 }, 00:16:21.785 { 00:16:21.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.785 "dma_device_type": 2 00:16:21.785 } 00:16:21.785 ], 00:16:21.785 "driver_specific": {} 00:16:21.785 } 00:16:21.785 ] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.785 BaseBdev3 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.785 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.786 [ 00:16:21.786 { 00:16:21.786 "name": "BaseBdev3", 00:16:21.786 "aliases": [ 00:16:21.786 "2ae1337b-7976-4735-89e8-8991b09efddf" 00:16:21.786 ], 00:16:21.786 "product_name": "Malloc disk", 00:16:21.786 "block_size": 512, 00:16:21.786 "num_blocks": 65536, 00:16:21.786 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:21.786 "assigned_rate_limits": { 00:16:21.786 "rw_ios_per_sec": 0, 00:16:21.786 "rw_mbytes_per_sec": 0, 00:16:21.786 "r_mbytes_per_sec": 0, 00:16:21.786 "w_mbytes_per_sec": 0 00:16:21.786 }, 00:16:21.786 "claimed": false, 00:16:21.786 "zoned": false, 00:16:21.786 "supported_io_types": { 00:16:21.786 "read": true, 00:16:21.786 "write": true, 00:16:21.786 "unmap": true, 00:16:21.786 "flush": true, 00:16:21.786 "reset": true, 00:16:21.786 "nvme_admin": false, 00:16:21.786 "nvme_io": false, 00:16:21.786 "nvme_io_md": false, 00:16:21.786 "write_zeroes": true, 00:16:21.786 "zcopy": true, 00:16:21.786 "get_zone_info": false, 00:16:21.786 "zone_management": false, 00:16:21.786 "zone_append": false, 00:16:21.786 "compare": false, 00:16:21.786 "compare_and_write": false, 00:16:21.786 "abort": true, 00:16:21.786 "seek_hole": false, 00:16:21.786 "seek_data": false, 00:16:21.786 "copy": true, 00:16:21.786 "nvme_iov_md": false 00:16:21.786 }, 00:16:21.786 "memory_domains": [ 00:16:21.786 { 00:16:21.786 "dma_device_id": "system", 00:16:21.786 "dma_device_type": 1 00:16:21.786 }, 00:16:21.786 { 00:16:21.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.786 "dma_device_type": 2 00:16:21.786 } 00:16:21.786 ], 00:16:21.786 "driver_specific": {} 00:16:21.786 } 00:16:21.786 ] 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:21.786 14:33:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.786 [2024-11-20 14:33:22.739209] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:21.786 [2024-11-20 14:33:22.739439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:21.786 [2024-11-20 14:33:22.739494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:21.786 [2024-11-20 14:33:22.742097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.786 14:33:22 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.786 "name": "Existed_Raid", 00:16:21.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.786 "strip_size_kb": 64, 00:16:21.786 "state": "configuring", 00:16:21.786 "raid_level": "raid5f", 00:16:21.786 "superblock": false, 00:16:21.786 "num_base_bdevs": 3, 00:16:21.786 "num_base_bdevs_discovered": 2, 00:16:21.786 "num_base_bdevs_operational": 3, 00:16:21.786 "base_bdevs_list": [ 00:16:21.786 { 00:16:21.786 "name": "BaseBdev1", 00:16:21.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.786 "is_configured": false, 00:16:21.786 "data_offset": 0, 00:16:21.786 "data_size": 0 00:16:21.786 }, 00:16:21.786 { 00:16:21.786 "name": "BaseBdev2", 00:16:21.786 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:21.786 "is_configured": true, 00:16:21.786 "data_offset": 0, 00:16:21.786 "data_size": 65536 00:16:21.786 }, 00:16:21.786 { 00:16:21.786 "name": "BaseBdev3", 00:16:21.786 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:21.786 "is_configured": true, 
00:16:21.786 "data_offset": 0, 00:16:21.786 "data_size": 65536 00:16:21.786 } 00:16:21.786 ] 00:16:21.786 }' 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.786 14:33:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.352 [2024-11-20 14:33:23.259333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.352 14:33:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.352 "name": "Existed_Raid", 00:16:22.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.352 "strip_size_kb": 64, 00:16:22.352 "state": "configuring", 00:16:22.352 "raid_level": "raid5f", 00:16:22.352 "superblock": false, 00:16:22.352 "num_base_bdevs": 3, 00:16:22.352 "num_base_bdevs_discovered": 1, 00:16:22.352 "num_base_bdevs_operational": 3, 00:16:22.352 "base_bdevs_list": [ 00:16:22.352 { 00:16:22.352 "name": "BaseBdev1", 00:16:22.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.352 "is_configured": false, 00:16:22.352 "data_offset": 0, 00:16:22.352 "data_size": 0 00:16:22.352 }, 00:16:22.352 { 00:16:22.352 "name": null, 00:16:22.352 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:22.352 "is_configured": false, 00:16:22.352 "data_offset": 0, 00:16:22.352 "data_size": 65536 00:16:22.352 }, 00:16:22.352 { 00:16:22.352 "name": "BaseBdev3", 00:16:22.352 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:22.352 "is_configured": true, 00:16:22.352 "data_offset": 0, 00:16:22.352 "data_size": 65536 00:16:22.352 } 00:16:22.352 ] 00:16:22.352 }' 00:16:22.352 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.352 14:33:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.918 [2024-11-20 14:33:23.882519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.918 BaseBdev1 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.918 14:33:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.918 [ 00:16:22.918 { 00:16:22.918 "name": "BaseBdev1", 00:16:22.918 "aliases": [ 00:16:22.918 "bcd9fcdf-883a-4726-8e63-b2a0425e84d8" 00:16:22.918 ], 00:16:22.918 "product_name": "Malloc disk", 00:16:22.918 "block_size": 512, 00:16:22.918 "num_blocks": 65536, 00:16:22.918 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:22.918 "assigned_rate_limits": { 00:16:22.918 "rw_ios_per_sec": 0, 00:16:22.918 "rw_mbytes_per_sec": 0, 00:16:22.918 "r_mbytes_per_sec": 0, 00:16:22.918 "w_mbytes_per_sec": 0 00:16:22.918 }, 00:16:22.918 "claimed": true, 00:16:22.918 "claim_type": "exclusive_write", 00:16:22.918 "zoned": false, 00:16:22.918 "supported_io_types": { 00:16:22.918 "read": true, 00:16:22.918 "write": true, 00:16:22.918 "unmap": true, 00:16:22.918 "flush": true, 00:16:22.918 "reset": true, 00:16:22.918 "nvme_admin": false, 00:16:22.918 "nvme_io": false, 00:16:22.918 "nvme_io_md": false, 00:16:22.918 "write_zeroes": true, 00:16:22.918 "zcopy": true, 00:16:22.918 "get_zone_info": false, 00:16:22.918 "zone_management": false, 00:16:22.918 "zone_append": false, 00:16:22.918 
"compare": false, 00:16:22.918 "compare_and_write": false, 00:16:22.918 "abort": true, 00:16:22.918 "seek_hole": false, 00:16:22.918 "seek_data": false, 00:16:22.918 "copy": true, 00:16:22.918 "nvme_iov_md": false 00:16:22.918 }, 00:16:22.918 "memory_domains": [ 00:16:22.918 { 00:16:22.918 "dma_device_id": "system", 00:16:22.918 "dma_device_type": 1 00:16:22.918 }, 00:16:22.918 { 00:16:22.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.918 "dma_device_type": 2 00:16:22.918 } 00:16:22.918 ], 00:16:22.918 "driver_specific": {} 00:16:22.918 } 00:16:22.918 ] 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.918 14:33:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.918 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.178 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.178 "name": "Existed_Raid", 00:16:23.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.178 "strip_size_kb": 64, 00:16:23.178 "state": "configuring", 00:16:23.178 "raid_level": "raid5f", 00:16:23.178 "superblock": false, 00:16:23.178 "num_base_bdevs": 3, 00:16:23.178 "num_base_bdevs_discovered": 2, 00:16:23.178 "num_base_bdevs_operational": 3, 00:16:23.178 "base_bdevs_list": [ 00:16:23.178 { 00:16:23.178 "name": "BaseBdev1", 00:16:23.178 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:23.178 "is_configured": true, 00:16:23.178 "data_offset": 0, 00:16:23.178 "data_size": 65536 00:16:23.178 }, 00:16:23.178 { 00:16:23.178 "name": null, 00:16:23.178 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:23.178 "is_configured": false, 00:16:23.178 "data_offset": 0, 00:16:23.178 "data_size": 65536 00:16:23.178 }, 00:16:23.178 { 00:16:23.178 "name": "BaseBdev3", 00:16:23.178 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:23.178 "is_configured": true, 00:16:23.178 "data_offset": 0, 00:16:23.178 "data_size": 65536 00:16:23.178 } 00:16:23.178 ] 00:16:23.178 }' 00:16:23.178 14:33:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.178 14:33:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.436 14:33:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.436 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.436 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.436 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:23.436 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.695 [2024-11-20 14:33:24.518781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.695 14:33:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.695 "name": "Existed_Raid", 00:16:23.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.695 "strip_size_kb": 64, 00:16:23.695 "state": "configuring", 00:16:23.695 "raid_level": "raid5f", 00:16:23.695 "superblock": false, 00:16:23.695 "num_base_bdevs": 3, 00:16:23.695 "num_base_bdevs_discovered": 1, 00:16:23.695 "num_base_bdevs_operational": 3, 00:16:23.695 "base_bdevs_list": [ 00:16:23.695 { 00:16:23.695 "name": "BaseBdev1", 00:16:23.695 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:23.695 "is_configured": true, 00:16:23.695 "data_offset": 0, 00:16:23.695 "data_size": 65536 00:16:23.695 }, 00:16:23.695 { 00:16:23.695 "name": null, 00:16:23.695 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:23.695 "is_configured": false, 00:16:23.695 "data_offset": 0, 00:16:23.695 "data_size": 65536 00:16:23.695 }, 00:16:23.695 { 00:16:23.695 "name": null, 
00:16:23.695 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:23.695 "is_configured": false, 00:16:23.695 "data_offset": 0, 00:16:23.695 "data_size": 65536 00:16:23.695 } 00:16:23.695 ] 00:16:23.695 }' 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.695 14:33:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.261 [2024-11-20 14:33:25.134976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.261 14:33:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.261 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.262 "name": "Existed_Raid", 00:16:24.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.262 "strip_size_kb": 64, 00:16:24.262 "state": "configuring", 00:16:24.262 "raid_level": "raid5f", 00:16:24.262 "superblock": false, 00:16:24.262 "num_base_bdevs": 3, 00:16:24.262 "num_base_bdevs_discovered": 2, 00:16:24.262 "num_base_bdevs_operational": 3, 00:16:24.262 "base_bdevs_list": [ 00:16:24.262 { 
00:16:24.262 "name": "BaseBdev1", 00:16:24.262 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:24.262 "is_configured": true, 00:16:24.262 "data_offset": 0, 00:16:24.262 "data_size": 65536 00:16:24.262 }, 00:16:24.262 { 00:16:24.262 "name": null, 00:16:24.262 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:24.262 "is_configured": false, 00:16:24.262 "data_offset": 0, 00:16:24.262 "data_size": 65536 00:16:24.262 }, 00:16:24.262 { 00:16:24.262 "name": "BaseBdev3", 00:16:24.262 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:24.262 "is_configured": true, 00:16:24.262 "data_offset": 0, 00:16:24.262 "data_size": 65536 00:16:24.262 } 00:16:24.262 ] 00:16:24.262 }' 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.262 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.828 [2024-11-20 14:33:25.679119] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.828 "name": "Existed_Raid", 00:16:24.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.828 "strip_size_kb": 64, 00:16:24.828 "state": "configuring", 00:16:24.828 "raid_level": "raid5f", 00:16:24.828 "superblock": false, 00:16:24.828 "num_base_bdevs": 3, 00:16:24.828 "num_base_bdevs_discovered": 1, 00:16:24.828 "num_base_bdevs_operational": 3, 00:16:24.828 "base_bdevs_list": [ 00:16:24.828 { 00:16:24.828 "name": null, 00:16:24.828 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:24.828 "is_configured": false, 00:16:24.828 "data_offset": 0, 00:16:24.828 "data_size": 65536 00:16:24.828 }, 00:16:24.828 { 00:16:24.828 "name": null, 00:16:24.828 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:24.828 "is_configured": false, 00:16:24.828 "data_offset": 0, 00:16:24.828 "data_size": 65536 00:16:24.828 }, 00:16:24.828 { 00:16:24.828 "name": "BaseBdev3", 00:16:24.828 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:24.828 "is_configured": true, 00:16:24.828 "data_offset": 0, 00:16:24.828 "data_size": 65536 00:16:24.828 } 00:16:24.828 ] 00:16:24.828 }' 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.828 14:33:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 [2024-11-20 14:33:26.324907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.394 14:33:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.394 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.394 "name": "Existed_Raid", 00:16:25.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.394 "strip_size_kb": 64, 00:16:25.394 "state": "configuring", 00:16:25.394 "raid_level": "raid5f", 00:16:25.395 "superblock": false, 00:16:25.395 "num_base_bdevs": 3, 00:16:25.395 "num_base_bdevs_discovered": 2, 00:16:25.395 "num_base_bdevs_operational": 3, 00:16:25.395 "base_bdevs_list": [ 00:16:25.395 { 00:16:25.395 "name": null, 00:16:25.395 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:25.395 "is_configured": false, 00:16:25.395 "data_offset": 0, 00:16:25.395 "data_size": 65536 00:16:25.395 }, 00:16:25.395 { 00:16:25.395 "name": "BaseBdev2", 00:16:25.395 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:25.395 "is_configured": true, 00:16:25.395 "data_offset": 0, 00:16:25.395 "data_size": 65536 00:16:25.395 }, 00:16:25.395 { 00:16:25.395 "name": "BaseBdev3", 00:16:25.395 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:25.395 "is_configured": true, 00:16:25.395 "data_offset": 0, 00:16:25.395 "data_size": 65536 00:16:25.395 } 00:16:25.395 ] 00:16:25.395 }' 00:16:25.395 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.395 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:25.962 14:33:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bcd9fcdf-883a-4726-8e63-b2a0425e84d8 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.962 [2024-11-20 14:33:26.953109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:25.962 [2024-11-20 14:33:26.953175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:25.962 [2024-11-20 14:33:26.953191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:25.962 [2024-11-20 14:33:26.953500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:16:25.962 [2024-11-20 14:33:26.958507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:25.962 [2024-11-20 14:33:26.958533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:25.962 [2024-11-20 14:33:26.958868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.962 NewBaseBdev 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.962 14:33:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.962 [ 00:16:25.962 { 00:16:25.962 "name": "NewBaseBdev", 00:16:25.962 "aliases": [ 00:16:25.962 "bcd9fcdf-883a-4726-8e63-b2a0425e84d8" 00:16:25.962 ], 00:16:25.962 "product_name": "Malloc disk", 00:16:25.962 "block_size": 512, 00:16:25.962 "num_blocks": 65536, 00:16:25.962 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:25.962 "assigned_rate_limits": { 00:16:25.962 "rw_ios_per_sec": 0, 00:16:25.962 "rw_mbytes_per_sec": 0, 00:16:25.962 "r_mbytes_per_sec": 0, 00:16:25.962 "w_mbytes_per_sec": 0 00:16:25.962 }, 00:16:25.962 "claimed": true, 00:16:25.962 "claim_type": "exclusive_write", 00:16:25.962 "zoned": false, 00:16:25.962 "supported_io_types": { 00:16:25.962 "read": true, 00:16:25.962 "write": true, 00:16:25.962 "unmap": true, 00:16:25.962 "flush": true, 00:16:25.962 "reset": true, 00:16:25.962 "nvme_admin": false, 00:16:25.962 "nvme_io": false, 00:16:25.962 "nvme_io_md": false, 00:16:25.962 "write_zeroes": true, 00:16:25.962 "zcopy": true, 00:16:25.962 "get_zone_info": false, 00:16:25.962 "zone_management": false, 00:16:25.962 "zone_append": false, 00:16:25.962 "compare": false, 00:16:25.962 "compare_and_write": false, 00:16:25.962 "abort": true, 00:16:25.962 "seek_hole": false, 00:16:25.962 "seek_data": false, 00:16:25.962 "copy": true, 00:16:25.962 "nvme_iov_md": false 00:16:25.962 }, 00:16:25.962 "memory_domains": [ 00:16:25.962 { 00:16:25.962 "dma_device_id": "system", 00:16:25.962 "dma_device_type": 1 00:16:25.962 }, 00:16:25.962 { 00:16:25.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.962 "dma_device_type": 2 00:16:25.962 } 00:16:25.962 ], 00:16:25.962 "driver_specific": {} 00:16:25.962 } 00:16:25.962 ] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.962 14:33:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.962 14:33:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.962 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.222 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.222 "name": "Existed_Raid", 00:16:26.222 "uuid": "276b13a3-3b87-4f13-9be7-39619ab47b0a", 00:16:26.222 "strip_size_kb": 64, 00:16:26.222 "state": "online", 
00:16:26.222 "raid_level": "raid5f", 00:16:26.222 "superblock": false, 00:16:26.222 "num_base_bdevs": 3, 00:16:26.222 "num_base_bdevs_discovered": 3, 00:16:26.222 "num_base_bdevs_operational": 3, 00:16:26.222 "base_bdevs_list": [ 00:16:26.222 { 00:16:26.222 "name": "NewBaseBdev", 00:16:26.222 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:26.222 "is_configured": true, 00:16:26.222 "data_offset": 0, 00:16:26.222 "data_size": 65536 00:16:26.222 }, 00:16:26.222 { 00:16:26.222 "name": "BaseBdev2", 00:16:26.222 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:26.222 "is_configured": true, 00:16:26.222 "data_offset": 0, 00:16:26.222 "data_size": 65536 00:16:26.222 }, 00:16:26.222 { 00:16:26.222 "name": "BaseBdev3", 00:16:26.222 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:26.222 "is_configured": true, 00:16:26.222 "data_offset": 0, 00:16:26.222 "data_size": 65536 00:16:26.222 } 00:16:26.222 ] 00:16:26.222 }' 00:16:26.222 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.222 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.481 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.481 [2024-11-20 14:33:27.524922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.739 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.740 "name": "Existed_Raid", 00:16:26.740 "aliases": [ 00:16:26.740 "276b13a3-3b87-4f13-9be7-39619ab47b0a" 00:16:26.740 ], 00:16:26.740 "product_name": "Raid Volume", 00:16:26.740 "block_size": 512, 00:16:26.740 "num_blocks": 131072, 00:16:26.740 "uuid": "276b13a3-3b87-4f13-9be7-39619ab47b0a", 00:16:26.740 "assigned_rate_limits": { 00:16:26.740 "rw_ios_per_sec": 0, 00:16:26.740 "rw_mbytes_per_sec": 0, 00:16:26.740 "r_mbytes_per_sec": 0, 00:16:26.740 "w_mbytes_per_sec": 0 00:16:26.740 }, 00:16:26.740 "claimed": false, 00:16:26.740 "zoned": false, 00:16:26.740 "supported_io_types": { 00:16:26.740 "read": true, 00:16:26.740 "write": true, 00:16:26.740 "unmap": false, 00:16:26.740 "flush": false, 00:16:26.740 "reset": true, 00:16:26.740 "nvme_admin": false, 00:16:26.740 "nvme_io": false, 00:16:26.740 "nvme_io_md": false, 00:16:26.740 "write_zeroes": true, 00:16:26.740 "zcopy": false, 00:16:26.740 "get_zone_info": false, 00:16:26.740 "zone_management": false, 00:16:26.740 "zone_append": false, 00:16:26.740 "compare": false, 00:16:26.740 "compare_and_write": false, 00:16:26.740 "abort": false, 00:16:26.740 "seek_hole": false, 00:16:26.740 "seek_data": false, 00:16:26.740 "copy": false, 00:16:26.740 "nvme_iov_md": false 00:16:26.740 }, 00:16:26.740 "driver_specific": { 00:16:26.740 "raid": { 00:16:26.740 "uuid": "276b13a3-3b87-4f13-9be7-39619ab47b0a", 
00:16:26.740 "strip_size_kb": 64, 00:16:26.740 "state": "online", 00:16:26.740 "raid_level": "raid5f", 00:16:26.740 "superblock": false, 00:16:26.740 "num_base_bdevs": 3, 00:16:26.740 "num_base_bdevs_discovered": 3, 00:16:26.740 "num_base_bdevs_operational": 3, 00:16:26.740 "base_bdevs_list": [ 00:16:26.740 { 00:16:26.740 "name": "NewBaseBdev", 00:16:26.740 "uuid": "bcd9fcdf-883a-4726-8e63-b2a0425e84d8", 00:16:26.740 "is_configured": true, 00:16:26.740 "data_offset": 0, 00:16:26.740 "data_size": 65536 00:16:26.740 }, 00:16:26.740 { 00:16:26.740 "name": "BaseBdev2", 00:16:26.740 "uuid": "838771ab-5341-4a7a-9e3e-c87d475d96e0", 00:16:26.740 "is_configured": true, 00:16:26.740 "data_offset": 0, 00:16:26.740 "data_size": 65536 00:16:26.740 }, 00:16:26.740 { 00:16:26.740 "name": "BaseBdev3", 00:16:26.740 "uuid": "2ae1337b-7976-4735-89e8-8991b09efddf", 00:16:26.740 "is_configured": true, 00:16:26.740 "data_offset": 0, 00:16:26.740 "data_size": 65536 00:16:26.740 } 00:16:26.740 ] 00:16:26.740 } 00:16:26.740 } 00:16:26.740 }' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:26.740 BaseBdev2 00:16:26.740 BaseBdev3' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.740 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.040 [2024-11-20 14:33:27.828746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.040 [2024-11-20 14:33:27.828782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.040 [2024-11-20 14:33:27.828872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.040 [2024-11-20 14:33:27.829256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.040 [2024-11-20 14:33:27.829294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80316 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80316 ']' 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80316 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80316 00:16:27.040 killing process with pid 80316 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80316' 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80316 00:16:27.040 [2024-11-20 14:33:27.867388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.040 14:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80316 00:16:27.299 [2024-11-20 14:33:28.139301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:28.233 00:16:28.233 real 0m11.843s 00:16:28.233 user 0m19.572s 00:16:28.233 sys 0m1.704s 00:16:28.233 ************************************ 00:16:28.233 END TEST raid5f_state_function_test 00:16:28.233 ************************************ 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.233 14:33:29 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:28.233 14:33:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
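For reference, the RPC sequence the raid5f state-function test above walks through can be reproduced manually along these lines. This is a dry-run sketch, not the test script itself: the `rpc` stub only prints each command, and in a live run it would be SPDK's `scripts/rpc.py` talking to a running target on `/var/tmp/spdk.sock`. Bdev names, sizes, and the strip size are taken from the log above.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC flow exercised by raid5f_state_function_test.
# Assumption: replacing this stub with SPDK's scripts/rpc.py (target listening
# on /var/tmp/spdk.sock) would execute the same sequence for real.
rpc() { echo "rpc.py $*"; }

# Create three malloc bdevs to act as base devices
# (sizes from the log: 32 MiB volumes with 512-byte blocks).
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    rpc bdev_malloc_create 32 512 -b "$b"
done

# Assemble them into a raid5f volume with a 64 KiB strip size.
rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Inspect state: the log above shows the volume reporting "online" with
# num_base_bdevs_discovered == num_base_bdevs_operational == 3.
rpc bdev_raid_get_bdevs all

# Tear down; per the DEBUG lines above, the raid bdev transitions
# online -> offline before its base bdevs are released.
rpc bdev_raid_delete Existed_Raid
```

Dropping the `rpc` stub and sourcing the real `rpc.py` path turns the dry run into an actual reproduction against a live `bdev_svc` target.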
00:16:28.233 14:33:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.233 14:33:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.233 ************************************ 00:16:28.233 START TEST raid5f_state_function_test_sb 00:16:28.233 ************************************ 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:28.233 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:28.234 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:28.234 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:28.234 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:28.234 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:28.234 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:28.491 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80947 00:16:28.492 Process raid pid: 80947 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80947' 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80947 00:16:28.492 14:33:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80947 ']' 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.492 14:33:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.492 [2024-11-20 14:33:29.401873] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:16:28.492 [2024-11-20 14:33:29.402050] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.750 [2024-11-20 14:33:29.586030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.750 [2024-11-20 14:33:29.728403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.008 [2024-11-20 14:33:29.949302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.008 [2024-11-20 14:33:29.949356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:29.575 14:33:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.575 [2024-11-20 14:33:30.410285] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.575 [2024-11-20 14:33:30.410356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.575 [2024-11-20 14:33:30.410375] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.575 [2024-11-20 14:33:30.410392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.575 [2024-11-20 14:33:30.410401] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.575 [2024-11-20 14:33:30.410416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.575 "name": "Existed_Raid", 00:16:29.575 "uuid": "c781534c-51e8-4981-9b80-c5507843b96a", 00:16:29.575 "strip_size_kb": 64, 00:16:29.575 "state": "configuring", 00:16:29.575 "raid_level": "raid5f", 00:16:29.575 "superblock": true, 00:16:29.575 "num_base_bdevs": 3, 00:16:29.575 "num_base_bdevs_discovered": 0, 00:16:29.575 "num_base_bdevs_operational": 3, 00:16:29.575 "base_bdevs_list": [ 00:16:29.575 { 00:16:29.575 "name": "BaseBdev1", 00:16:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.575 "is_configured": false, 00:16:29.575 "data_offset": 0, 00:16:29.575 "data_size": 0 00:16:29.575 }, 00:16:29.575 { 00:16:29.575 "name": "BaseBdev2", 00:16:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.575 "is_configured": false, 00:16:29.575 
"data_offset": 0, 00:16:29.575 "data_size": 0 00:16:29.575 }, 00:16:29.575 { 00:16:29.575 "name": "BaseBdev3", 00:16:29.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.575 "is_configured": false, 00:16:29.575 "data_offset": 0, 00:16:29.575 "data_size": 0 00:16:29.575 } 00:16:29.575 ] 00:16:29.575 }' 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.575 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.142 [2024-11-20 14:33:30.966647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.142 [2024-11-20 14:33:30.966693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.142 [2024-11-20 14:33:30.974653] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.142 [2024-11-20 14:33:30.974707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.142 [2024-11-20 14:33:30.974723] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.142 [2024-11-20 14:33:30.974739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.142 [2024-11-20 14:33:30.974748] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.142 [2024-11-20 14:33:30.974772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.142 14:33:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.142 [2024-11-20 14:33:31.020534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.142 BaseBdev1 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.142 [ 00:16:30.142 { 00:16:30.142 "name": "BaseBdev1", 00:16:30.142 "aliases": [ 00:16:30.142 "b7864b0a-b747-4980-8509-3af5d8c63779" 00:16:30.142 ], 00:16:30.142 "product_name": "Malloc disk", 00:16:30.142 "block_size": 512, 00:16:30.142 "num_blocks": 65536, 00:16:30.142 "uuid": "b7864b0a-b747-4980-8509-3af5d8c63779", 00:16:30.142 "assigned_rate_limits": { 00:16:30.142 "rw_ios_per_sec": 0, 00:16:30.142 "rw_mbytes_per_sec": 0, 00:16:30.142 "r_mbytes_per_sec": 0, 00:16:30.142 "w_mbytes_per_sec": 0 00:16:30.142 }, 00:16:30.142 "claimed": true, 00:16:30.142 "claim_type": "exclusive_write", 00:16:30.142 "zoned": false, 00:16:30.142 "supported_io_types": { 00:16:30.142 "read": true, 00:16:30.142 "write": true, 00:16:30.142 "unmap": true, 00:16:30.142 "flush": true, 00:16:30.142 "reset": true, 00:16:30.142 "nvme_admin": false, 00:16:30.142 "nvme_io": false, 00:16:30.142 "nvme_io_md": false, 00:16:30.142 "write_zeroes": true, 00:16:30.142 "zcopy": true, 00:16:30.142 "get_zone_info": false, 00:16:30.142 "zone_management": false, 00:16:30.142 "zone_append": false, 00:16:30.142 "compare": false, 00:16:30.142 "compare_and_write": false, 00:16:30.142 "abort": true, 00:16:30.142 "seek_hole": false, 00:16:30.142 
"seek_data": false, 00:16:30.142 "copy": true, 00:16:30.142 "nvme_iov_md": false 00:16:30.142 }, 00:16:30.142 "memory_domains": [ 00:16:30.142 { 00:16:30.142 "dma_device_id": "system", 00:16:30.142 "dma_device_type": 1 00:16:30.142 }, 00:16:30.142 { 00:16:30.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.142 "dma_device_type": 2 00:16:30.142 } 00:16:30.142 ], 00:16:30.142 "driver_specific": {} 00:16:30.142 } 00:16:30.142 ] 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.142 "name": "Existed_Raid", 00:16:30.142 "uuid": "be21cc5d-0ee4-44ba-afbe-15688220bfbb", 00:16:30.142 "strip_size_kb": 64, 00:16:30.142 "state": "configuring", 00:16:30.142 "raid_level": "raid5f", 00:16:30.142 "superblock": true, 00:16:30.142 "num_base_bdevs": 3, 00:16:30.142 "num_base_bdevs_discovered": 1, 00:16:30.142 "num_base_bdevs_operational": 3, 00:16:30.142 "base_bdevs_list": [ 00:16:30.142 { 00:16:30.142 "name": "BaseBdev1", 00:16:30.142 "uuid": "b7864b0a-b747-4980-8509-3af5d8c63779", 00:16:30.142 "is_configured": true, 00:16:30.142 "data_offset": 2048, 00:16:30.142 "data_size": 63488 00:16:30.142 }, 00:16:30.142 { 00:16:30.142 "name": "BaseBdev2", 00:16:30.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.142 "is_configured": false, 00:16:30.142 "data_offset": 0, 00:16:30.142 "data_size": 0 00:16:30.142 }, 00:16:30.142 { 00:16:30.142 "name": "BaseBdev3", 00:16:30.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.142 "is_configured": false, 00:16:30.142 "data_offset": 0, 00:16:30.142 "data_size": 0 00:16:30.142 } 00:16:30.142 ] 00:16:30.142 }' 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.142 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 [2024-11-20 14:33:31.544982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.709 [2024-11-20 14:33:31.545107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 [2024-11-20 14:33:31.553048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.709 [2024-11-20 14:33:31.557937] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.709 [2024-11-20 14:33:31.558010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.709 [2024-11-20 14:33:31.558038] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.709 [2024-11-20 14:33:31.558063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.709 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.710 "name": 
"Existed_Raid", 00:16:30.710 "uuid": "2ae6cf39-6e03-4d28-81f9-c3fccb125527", 00:16:30.710 "strip_size_kb": 64, 00:16:30.710 "state": "configuring", 00:16:30.710 "raid_level": "raid5f", 00:16:30.710 "superblock": true, 00:16:30.710 "num_base_bdevs": 3, 00:16:30.710 "num_base_bdevs_discovered": 1, 00:16:30.710 "num_base_bdevs_operational": 3, 00:16:30.710 "base_bdevs_list": [ 00:16:30.710 { 00:16:30.710 "name": "BaseBdev1", 00:16:30.710 "uuid": "b7864b0a-b747-4980-8509-3af5d8c63779", 00:16:30.710 "is_configured": true, 00:16:30.710 "data_offset": 2048, 00:16:30.710 "data_size": 63488 00:16:30.710 }, 00:16:30.710 { 00:16:30.710 "name": "BaseBdev2", 00:16:30.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.710 "is_configured": false, 00:16:30.710 "data_offset": 0, 00:16:30.710 "data_size": 0 00:16:30.710 }, 00:16:30.710 { 00:16:30.710 "name": "BaseBdev3", 00:16:30.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.710 "is_configured": false, 00:16:30.710 "data_offset": 0, 00:16:30.710 "data_size": 0 00:16:30.710 } 00:16:30.710 ] 00:16:30.710 }' 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.710 14:33:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.277 [2024-11-20 14:33:32.094347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.277 BaseBdev2 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.277 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.278 [ 00:16:31.278 { 00:16:31.278 "name": "BaseBdev2", 00:16:31.278 "aliases": [ 00:16:31.278 "a286e383-1389-401d-af6b-05efee588d0b" 00:16:31.278 ], 00:16:31.278 "product_name": "Malloc disk", 00:16:31.278 "block_size": 512, 00:16:31.278 "num_blocks": 65536, 00:16:31.278 "uuid": "a286e383-1389-401d-af6b-05efee588d0b", 00:16:31.278 "assigned_rate_limits": { 00:16:31.278 "rw_ios_per_sec": 0, 00:16:31.278 "rw_mbytes_per_sec": 0, 00:16:31.278 "r_mbytes_per_sec": 0, 00:16:31.278 "w_mbytes_per_sec": 0 00:16:31.278 }, 00:16:31.278 "claimed": true, 
00:16:31.278 "claim_type": "exclusive_write", 00:16:31.278 "zoned": false, 00:16:31.278 "supported_io_types": { 00:16:31.278 "read": true, 00:16:31.278 "write": true, 00:16:31.278 "unmap": true, 00:16:31.278 "flush": true, 00:16:31.278 "reset": true, 00:16:31.278 "nvme_admin": false, 00:16:31.278 "nvme_io": false, 00:16:31.278 "nvme_io_md": false, 00:16:31.278 "write_zeroes": true, 00:16:31.278 "zcopy": true, 00:16:31.278 "get_zone_info": false, 00:16:31.278 "zone_management": false, 00:16:31.278 "zone_append": false, 00:16:31.278 "compare": false, 00:16:31.278 "compare_and_write": false, 00:16:31.278 "abort": true, 00:16:31.278 "seek_hole": false, 00:16:31.278 "seek_data": false, 00:16:31.278 "copy": true, 00:16:31.278 "nvme_iov_md": false 00:16:31.278 }, 00:16:31.278 "memory_domains": [ 00:16:31.278 { 00:16:31.278 "dma_device_id": "system", 00:16:31.278 "dma_device_type": 1 00:16:31.278 }, 00:16:31.278 { 00:16:31.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.278 "dma_device_type": 2 00:16:31.278 } 00:16:31.278 ], 00:16:31.278 "driver_specific": {} 00:16:31.278 } 00:16:31.278 ] 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.278 14:33:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.278 "name": "Existed_Raid", 00:16:31.278 "uuid": "2ae6cf39-6e03-4d28-81f9-c3fccb125527", 00:16:31.278 "strip_size_kb": 64, 00:16:31.278 "state": "configuring", 00:16:31.278 "raid_level": "raid5f", 00:16:31.278 "superblock": true, 00:16:31.278 "num_base_bdevs": 3, 00:16:31.278 "num_base_bdevs_discovered": 2, 00:16:31.278 "num_base_bdevs_operational": 3, 00:16:31.278 "base_bdevs_list": [ 00:16:31.278 { 00:16:31.278 "name": "BaseBdev1", 00:16:31.278 "uuid": "b7864b0a-b747-4980-8509-3af5d8c63779", 
00:16:31.278 "is_configured": true, 00:16:31.278 "data_offset": 2048, 00:16:31.278 "data_size": 63488 00:16:31.278 }, 00:16:31.278 { 00:16:31.278 "name": "BaseBdev2", 00:16:31.278 "uuid": "a286e383-1389-401d-af6b-05efee588d0b", 00:16:31.278 "is_configured": true, 00:16:31.278 "data_offset": 2048, 00:16:31.278 "data_size": 63488 00:16:31.278 }, 00:16:31.278 { 00:16:31.278 "name": "BaseBdev3", 00:16:31.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.278 "is_configured": false, 00:16:31.278 "data_offset": 0, 00:16:31.278 "data_size": 0 00:16:31.278 } 00:16:31.278 ] 00:16:31.278 }' 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.278 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.844 [2024-11-20 14:33:32.672549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.844 [2024-11-20 14:33:32.672981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:31.844 [2024-11-20 14:33:32.673026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:31.844 BaseBdev3 00:16:31.844 [2024-11-20 14:33:32.673544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.844 [2024-11-20 14:33:32.679442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:31.844 [2024-11-20 14:33:32.679603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:31.844 [2024-11-20 14:33:32.680152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.844 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.844 [ 00:16:31.844 { 00:16:31.844 "name": "BaseBdev3", 00:16:31.844 "aliases": [ 00:16:31.845 "c5463af4-5c0b-43a1-bab2-afcc705f231f" 00:16:31.845 ], 00:16:31.845 "product_name": "Malloc disk", 00:16:31.845 "block_size": 512, 00:16:31.845 
"num_blocks": 65536, 00:16:31.845 "uuid": "c5463af4-5c0b-43a1-bab2-afcc705f231f", 00:16:31.845 "assigned_rate_limits": { 00:16:31.845 "rw_ios_per_sec": 0, 00:16:31.845 "rw_mbytes_per_sec": 0, 00:16:31.845 "r_mbytes_per_sec": 0, 00:16:31.845 "w_mbytes_per_sec": 0 00:16:31.845 }, 00:16:31.845 "claimed": true, 00:16:31.845 "claim_type": "exclusive_write", 00:16:31.845 "zoned": false, 00:16:31.845 "supported_io_types": { 00:16:31.845 "read": true, 00:16:31.845 "write": true, 00:16:31.845 "unmap": true, 00:16:31.845 "flush": true, 00:16:31.845 "reset": true, 00:16:31.845 "nvme_admin": false, 00:16:31.845 "nvme_io": false, 00:16:31.845 "nvme_io_md": false, 00:16:31.845 "write_zeroes": true, 00:16:31.845 "zcopy": true, 00:16:31.845 "get_zone_info": false, 00:16:31.845 "zone_management": false, 00:16:31.845 "zone_append": false, 00:16:31.845 "compare": false, 00:16:31.845 "compare_and_write": false, 00:16:31.845 "abort": true, 00:16:31.845 "seek_hole": false, 00:16:31.845 "seek_data": false, 00:16:31.845 "copy": true, 00:16:31.845 "nvme_iov_md": false 00:16:31.845 }, 00:16:31.845 "memory_domains": [ 00:16:31.845 { 00:16:31.845 "dma_device_id": "system", 00:16:31.845 "dma_device_type": 1 00:16:31.845 }, 00:16:31.845 { 00:16:31.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.845 "dma_device_type": 2 00:16:31.845 } 00:16:31.845 ], 00:16:31.845 "driver_specific": {} 00:16:31.845 } 00:16:31.845 ] 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.845 "name": "Existed_Raid", 00:16:31.845 "uuid": "2ae6cf39-6e03-4d28-81f9-c3fccb125527", 00:16:31.845 "strip_size_kb": 64, 00:16:31.845 "state": "online", 00:16:31.845 "raid_level": "raid5f", 00:16:31.845 "superblock": true, 
00:16:31.845 "num_base_bdevs": 3, 00:16:31.845 "num_base_bdevs_discovered": 3, 00:16:31.845 "num_base_bdevs_operational": 3, 00:16:31.845 "base_bdevs_list": [ 00:16:31.845 { 00:16:31.845 "name": "BaseBdev1", 00:16:31.845 "uuid": "b7864b0a-b747-4980-8509-3af5d8c63779", 00:16:31.845 "is_configured": true, 00:16:31.845 "data_offset": 2048, 00:16:31.845 "data_size": 63488 00:16:31.845 }, 00:16:31.845 { 00:16:31.845 "name": "BaseBdev2", 00:16:31.845 "uuid": "a286e383-1389-401d-af6b-05efee588d0b", 00:16:31.845 "is_configured": true, 00:16:31.845 "data_offset": 2048, 00:16:31.845 "data_size": 63488 00:16:31.845 }, 00:16:31.845 { 00:16:31.845 "name": "BaseBdev3", 00:16:31.845 "uuid": "c5463af4-5c0b-43a1-bab2-afcc705f231f", 00:16:31.845 "is_configured": true, 00:16:31.845 "data_offset": 2048, 00:16:31.845 "data_size": 63488 00:16:31.845 } 00:16:31.845 ] 00:16:31.845 }' 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.845 14:33:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.411 [2024-11-20 14:33:33.238821] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.411 "name": "Existed_Raid", 00:16:32.411 "aliases": [ 00:16:32.411 "2ae6cf39-6e03-4d28-81f9-c3fccb125527" 00:16:32.411 ], 00:16:32.411 "product_name": "Raid Volume", 00:16:32.411 "block_size": 512, 00:16:32.411 "num_blocks": 126976, 00:16:32.411 "uuid": "2ae6cf39-6e03-4d28-81f9-c3fccb125527", 00:16:32.411 "assigned_rate_limits": { 00:16:32.411 "rw_ios_per_sec": 0, 00:16:32.411 "rw_mbytes_per_sec": 0, 00:16:32.411 "r_mbytes_per_sec": 0, 00:16:32.411 "w_mbytes_per_sec": 0 00:16:32.411 }, 00:16:32.411 "claimed": false, 00:16:32.411 "zoned": false, 00:16:32.411 "supported_io_types": { 00:16:32.411 "read": true, 00:16:32.411 "write": true, 00:16:32.411 "unmap": false, 00:16:32.411 "flush": false, 00:16:32.411 "reset": true, 00:16:32.411 "nvme_admin": false, 00:16:32.411 "nvme_io": false, 00:16:32.411 "nvme_io_md": false, 00:16:32.411 "write_zeroes": true, 00:16:32.411 "zcopy": false, 00:16:32.411 "get_zone_info": false, 00:16:32.411 "zone_management": false, 00:16:32.411 "zone_append": false, 00:16:32.411 "compare": false, 00:16:32.411 "compare_and_write": false, 00:16:32.411 "abort": false, 00:16:32.411 "seek_hole": false, 00:16:32.411 "seek_data": false, 00:16:32.411 "copy": false, 00:16:32.411 "nvme_iov_md": false 00:16:32.411 }, 00:16:32.411 "driver_specific": { 00:16:32.411 "raid": { 00:16:32.411 "uuid": "2ae6cf39-6e03-4d28-81f9-c3fccb125527", 00:16:32.411 
"strip_size_kb": 64, 00:16:32.411 "state": "online", 00:16:32.411 "raid_level": "raid5f", 00:16:32.411 "superblock": true, 00:16:32.411 "num_base_bdevs": 3, 00:16:32.411 "num_base_bdevs_discovered": 3, 00:16:32.411 "num_base_bdevs_operational": 3, 00:16:32.411 "base_bdevs_list": [ 00:16:32.411 { 00:16:32.411 "name": "BaseBdev1", 00:16:32.411 "uuid": "b7864b0a-b747-4980-8509-3af5d8c63779", 00:16:32.411 "is_configured": true, 00:16:32.411 "data_offset": 2048, 00:16:32.411 "data_size": 63488 00:16:32.411 }, 00:16:32.411 { 00:16:32.411 "name": "BaseBdev2", 00:16:32.411 "uuid": "a286e383-1389-401d-af6b-05efee588d0b", 00:16:32.411 "is_configured": true, 00:16:32.411 "data_offset": 2048, 00:16:32.411 "data_size": 63488 00:16:32.411 }, 00:16:32.411 { 00:16:32.411 "name": "BaseBdev3", 00:16:32.411 "uuid": "c5463af4-5c0b-43a1-bab2-afcc705f231f", 00:16:32.411 "is_configured": true, 00:16:32.411 "data_offset": 2048, 00:16:32.411 "data_size": 63488 00:16:32.411 } 00:16:32.411 ] 00:16:32.411 } 00:16:32.411 } 00:16:32.411 }' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:32.411 BaseBdev2 00:16:32.411 BaseBdev3' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.411 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.670 14:33:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.670 [2024-11-20 14:33:33.550619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.670 
14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.670 "name": "Existed_Raid", 00:16:32.670 "uuid": "2ae6cf39-6e03-4d28-81f9-c3fccb125527", 00:16:32.670 "strip_size_kb": 64, 00:16:32.670 "state": "online", 00:16:32.670 "raid_level": "raid5f", 00:16:32.670 "superblock": true, 00:16:32.670 "num_base_bdevs": 3, 00:16:32.670 "num_base_bdevs_discovered": 2, 00:16:32.670 "num_base_bdevs_operational": 2, 00:16:32.670 
"base_bdevs_list": [ 00:16:32.670 { 00:16:32.670 "name": null, 00:16:32.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.670 "is_configured": false, 00:16:32.670 "data_offset": 0, 00:16:32.670 "data_size": 63488 00:16:32.670 }, 00:16:32.670 { 00:16:32.670 "name": "BaseBdev2", 00:16:32.670 "uuid": "a286e383-1389-401d-af6b-05efee588d0b", 00:16:32.670 "is_configured": true, 00:16:32.670 "data_offset": 2048, 00:16:32.670 "data_size": 63488 00:16:32.670 }, 00:16:32.670 { 00:16:32.670 "name": "BaseBdev3", 00:16:32.670 "uuid": "c5463af4-5c0b-43a1-bab2-afcc705f231f", 00:16:32.670 "is_configured": true, 00:16:32.670 "data_offset": 2048, 00:16:32.670 "data_size": 63488 00:16:32.670 } 00:16:32.670 ] 00:16:32.670 }' 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.670 14:33:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.236 14:33:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.236 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.236 [2024-11-20 14:33:34.225016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:33.236 [2024-11-20 14:33:34.225269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.495 [2024-11-20 14:33:34.315965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:33.495 14:33:34 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.495 [2024-11-20 14:33:34.376036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:33.495 [2024-11-20 14:33:34.376129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.495 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.754 BaseBdev2 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.754 [ 00:16:33.754 { 00:16:33.754 "name": "BaseBdev2", 
00:16:33.754 "aliases": [ 00:16:33.754 "faaee65b-c4ef-4c0f-bde8-463b307e6f3e" 00:16:33.754 ], 00:16:33.754 "product_name": "Malloc disk", 00:16:33.754 "block_size": 512, 00:16:33.754 "num_blocks": 65536, 00:16:33.754 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:33.754 "assigned_rate_limits": { 00:16:33.754 "rw_ios_per_sec": 0, 00:16:33.754 "rw_mbytes_per_sec": 0, 00:16:33.754 "r_mbytes_per_sec": 0, 00:16:33.754 "w_mbytes_per_sec": 0 00:16:33.754 }, 00:16:33.754 "claimed": false, 00:16:33.754 "zoned": false, 00:16:33.754 "supported_io_types": { 00:16:33.754 "read": true, 00:16:33.754 "write": true, 00:16:33.754 "unmap": true, 00:16:33.754 "flush": true, 00:16:33.754 "reset": true, 00:16:33.754 "nvme_admin": false, 00:16:33.754 "nvme_io": false, 00:16:33.754 "nvme_io_md": false, 00:16:33.754 "write_zeroes": true, 00:16:33.754 "zcopy": true, 00:16:33.754 "get_zone_info": false, 00:16:33.754 "zone_management": false, 00:16:33.754 "zone_append": false, 00:16:33.754 "compare": false, 00:16:33.754 "compare_and_write": false, 00:16:33.754 "abort": true, 00:16:33.754 "seek_hole": false, 00:16:33.754 "seek_data": false, 00:16:33.754 "copy": true, 00:16:33.754 "nvme_iov_md": false 00:16:33.754 }, 00:16:33.754 "memory_domains": [ 00:16:33.754 { 00:16:33.754 "dma_device_id": "system", 00:16:33.754 "dma_device_type": 1 00:16:33.754 }, 00:16:33.754 { 00:16:33.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.754 "dma_device_type": 2 00:16:33.754 } 00:16:33.754 ], 00:16:33.754 "driver_specific": {} 00:16:33.754 } 00:16:33.754 ] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.754 BaseBdev3 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.754 [ 00:16:33.754 { 00:16:33.754 "name": "BaseBdev3", 00:16:33.754 "aliases": [ 00:16:33.754 "f0f4f157-64c7-479c-a35b-d1fc57a7d013" 00:16:33.754 ], 00:16:33.754 "product_name": "Malloc disk", 00:16:33.754 "block_size": 512, 00:16:33.754 "num_blocks": 65536, 00:16:33.754 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:33.754 "assigned_rate_limits": { 00:16:33.754 "rw_ios_per_sec": 0, 00:16:33.754 "rw_mbytes_per_sec": 0, 00:16:33.754 "r_mbytes_per_sec": 0, 00:16:33.754 "w_mbytes_per_sec": 0 00:16:33.754 }, 00:16:33.754 "claimed": false, 00:16:33.754 "zoned": false, 00:16:33.754 "supported_io_types": { 00:16:33.754 "read": true, 00:16:33.754 "write": true, 00:16:33.754 "unmap": true, 00:16:33.754 "flush": true, 00:16:33.754 "reset": true, 00:16:33.754 "nvme_admin": false, 00:16:33.754 "nvme_io": false, 00:16:33.754 "nvme_io_md": false, 00:16:33.754 "write_zeroes": true, 00:16:33.754 "zcopy": true, 00:16:33.754 "get_zone_info": false, 00:16:33.754 "zone_management": false, 00:16:33.754 "zone_append": false, 00:16:33.754 "compare": false, 00:16:33.754 "compare_and_write": false, 00:16:33.754 "abort": true, 00:16:33.754 "seek_hole": false, 00:16:33.754 "seek_data": false, 00:16:33.754 "copy": true, 00:16:33.754 "nvme_iov_md": false 00:16:33.754 }, 00:16:33.754 "memory_domains": [ 00:16:33.754 { 00:16:33.754 "dma_device_id": "system", 00:16:33.754 "dma_device_type": 1 00:16:33.754 }, 00:16:33.754 { 00:16:33.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.754 "dma_device_type": 2 00:16:33.754 } 00:16:33.754 ], 00:16:33.754 "driver_specific": {} 00:16:33.754 } 00:16:33.754 ] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:33.754 
14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.754 [2024-11-20 14:33:34.678263] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.754 [2024-11-20 14:33:34.678341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.754 [2024-11-20 14:33:34.678382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.754 [2024-11-20 14:33:34.680956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.754 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.755 "name": "Existed_Raid", 00:16:33.755 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:33.755 "strip_size_kb": 64, 00:16:33.755 "state": "configuring", 00:16:33.755 "raid_level": "raid5f", 00:16:33.755 "superblock": true, 00:16:33.755 "num_base_bdevs": 3, 00:16:33.755 "num_base_bdevs_discovered": 2, 00:16:33.755 "num_base_bdevs_operational": 3, 00:16:33.755 "base_bdevs_list": [ 00:16:33.755 { 00:16:33.755 "name": "BaseBdev1", 00:16:33.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.755 "is_configured": false, 00:16:33.755 "data_offset": 0, 00:16:33.755 "data_size": 0 00:16:33.755 }, 00:16:33.755 { 00:16:33.755 "name": "BaseBdev2", 00:16:33.755 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:33.755 "is_configured": true, 00:16:33.755 "data_offset": 2048, 00:16:33.755 "data_size": 63488 00:16:33.755 }, 00:16:33.755 { 00:16:33.755 "name": "BaseBdev3", 00:16:33.755 "uuid": 
"f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:33.755 "is_configured": true, 00:16:33.755 "data_offset": 2048, 00:16:33.755 "data_size": 63488 00:16:33.755 } 00:16:33.755 ] 00:16:33.755 }' 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.755 14:33:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.320 [2024-11-20 14:33:35.202492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.320 14:33:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.320 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.321 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.321 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.321 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.321 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.321 "name": "Existed_Raid", 00:16:34.321 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:34.321 "strip_size_kb": 64, 00:16:34.321 "state": "configuring", 00:16:34.321 "raid_level": "raid5f", 00:16:34.321 "superblock": true, 00:16:34.321 "num_base_bdevs": 3, 00:16:34.321 "num_base_bdevs_discovered": 1, 00:16:34.321 "num_base_bdevs_operational": 3, 00:16:34.321 "base_bdevs_list": [ 00:16:34.321 { 00:16:34.321 "name": "BaseBdev1", 00:16:34.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.321 "is_configured": false, 00:16:34.321 "data_offset": 0, 00:16:34.321 "data_size": 0 00:16:34.321 }, 00:16:34.321 { 00:16:34.321 "name": null, 00:16:34.321 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:34.321 "is_configured": false, 00:16:34.321 "data_offset": 0, 00:16:34.321 "data_size": 63488 00:16:34.321 }, 00:16:34.321 { 00:16:34.321 "name": "BaseBdev3", 00:16:34.321 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:34.321 "is_configured": true, 00:16:34.321 "data_offset": 2048, 00:16:34.321 "data_size": 63488 00:16:34.321 } 00:16:34.321 ] 
00:16:34.321 }' 00:16:34.321 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.321 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.886 [2024-11-20 14:33:35.780533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.886 BaseBdev1 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:34.886 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.887 [ 00:16:34.887 { 00:16:34.887 "name": "BaseBdev1", 00:16:34.887 "aliases": [ 00:16:34.887 "178eb624-6da5-4e57-bb23-bd2f5ee6630e" 00:16:34.887 ], 00:16:34.887 "product_name": "Malloc disk", 00:16:34.887 "block_size": 512, 00:16:34.887 "num_blocks": 65536, 00:16:34.887 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:34.887 "assigned_rate_limits": { 00:16:34.887 "rw_ios_per_sec": 0, 00:16:34.887 "rw_mbytes_per_sec": 0, 00:16:34.887 "r_mbytes_per_sec": 0, 00:16:34.887 "w_mbytes_per_sec": 0 00:16:34.887 }, 00:16:34.887 "claimed": true, 00:16:34.887 "claim_type": "exclusive_write", 00:16:34.887 "zoned": false, 00:16:34.887 "supported_io_types": { 00:16:34.887 "read": true, 00:16:34.887 "write": true, 00:16:34.887 "unmap": true, 00:16:34.887 "flush": true, 00:16:34.887 "reset": true, 00:16:34.887 "nvme_admin": false, 00:16:34.887 "nvme_io": false, 00:16:34.887 
"nvme_io_md": false, 00:16:34.887 "write_zeroes": true, 00:16:34.887 "zcopy": true, 00:16:34.887 "get_zone_info": false, 00:16:34.887 "zone_management": false, 00:16:34.887 "zone_append": false, 00:16:34.887 "compare": false, 00:16:34.887 "compare_and_write": false, 00:16:34.887 "abort": true, 00:16:34.887 "seek_hole": false, 00:16:34.887 "seek_data": false, 00:16:34.887 "copy": true, 00:16:34.887 "nvme_iov_md": false 00:16:34.887 }, 00:16:34.887 "memory_domains": [ 00:16:34.887 { 00:16:34.887 "dma_device_id": "system", 00:16:34.887 "dma_device_type": 1 00:16:34.887 }, 00:16:34.887 { 00:16:34.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.887 "dma_device_type": 2 00:16:34.887 } 00:16:34.887 ], 00:16:34.887 "driver_specific": {} 00:16:34.887 } 00:16:34.887 ] 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.887 
14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.887 "name": "Existed_Raid", 00:16:34.887 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:34.887 "strip_size_kb": 64, 00:16:34.887 "state": "configuring", 00:16:34.887 "raid_level": "raid5f", 00:16:34.887 "superblock": true, 00:16:34.887 "num_base_bdevs": 3, 00:16:34.887 "num_base_bdevs_discovered": 2, 00:16:34.887 "num_base_bdevs_operational": 3, 00:16:34.887 "base_bdevs_list": [ 00:16:34.887 { 00:16:34.887 "name": "BaseBdev1", 00:16:34.887 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:34.887 "is_configured": true, 00:16:34.887 "data_offset": 2048, 00:16:34.887 "data_size": 63488 00:16:34.887 }, 00:16:34.887 { 00:16:34.887 "name": null, 00:16:34.887 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:34.887 "is_configured": false, 00:16:34.887 "data_offset": 0, 00:16:34.887 "data_size": 63488 00:16:34.887 }, 00:16:34.887 { 00:16:34.887 "name": "BaseBdev3", 00:16:34.887 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:34.887 "is_configured": true, 00:16:34.887 "data_offset": 2048, 00:16:34.887 "data_size": 63488 00:16:34.887 } 
00:16:34.887 ] 00:16:34.887 }' 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.887 14:33:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.453 [2024-11-20 14:33:36.372739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.453 "name": "Existed_Raid", 00:16:35.453 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:35.453 "strip_size_kb": 64, 00:16:35.453 "state": "configuring", 00:16:35.453 "raid_level": "raid5f", 00:16:35.453 "superblock": true, 00:16:35.453 "num_base_bdevs": 3, 00:16:35.453 "num_base_bdevs_discovered": 1, 00:16:35.453 "num_base_bdevs_operational": 3, 00:16:35.453 "base_bdevs_list": [ 00:16:35.453 { 00:16:35.453 "name": "BaseBdev1", 00:16:35.453 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:35.453 "is_configured": true, 
00:16:35.453 "data_offset": 2048, 00:16:35.453 "data_size": 63488 00:16:35.453 }, 00:16:35.453 { 00:16:35.453 "name": null, 00:16:35.453 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:35.453 "is_configured": false, 00:16:35.453 "data_offset": 0, 00:16:35.453 "data_size": 63488 00:16:35.453 }, 00:16:35.453 { 00:16:35.453 "name": null, 00:16:35.453 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:35.453 "is_configured": false, 00:16:35.453 "data_offset": 0, 00:16:35.453 "data_size": 63488 00:16:35.453 } 00:16:35.453 ] 00:16:35.453 }' 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.453 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.019 [2024-11-20 14:33:36.961113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.019 14:33:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.019 14:33:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.019 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:36.019 "name": "Existed_Raid", 00:16:36.019 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:36.019 "strip_size_kb": 64, 00:16:36.019 "state": "configuring", 00:16:36.019 "raid_level": "raid5f", 00:16:36.019 "superblock": true, 00:16:36.019 "num_base_bdevs": 3, 00:16:36.019 "num_base_bdevs_discovered": 2, 00:16:36.019 "num_base_bdevs_operational": 3, 00:16:36.019 "base_bdevs_list": [ 00:16:36.019 { 00:16:36.019 "name": "BaseBdev1", 00:16:36.019 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:36.019 "is_configured": true, 00:16:36.019 "data_offset": 2048, 00:16:36.019 "data_size": 63488 00:16:36.019 }, 00:16:36.019 { 00:16:36.019 "name": null, 00:16:36.019 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:36.019 "is_configured": false, 00:16:36.019 "data_offset": 0, 00:16:36.019 "data_size": 63488 00:16:36.019 }, 00:16:36.019 { 00:16:36.019 "name": "BaseBdev3", 00:16:36.020 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:36.020 "is_configured": true, 00:16:36.020 "data_offset": 2048, 00:16:36.020 "data_size": 63488 00:16:36.020 } 00:16:36.020 ] 00:16:36.020 }' 00:16:36.020 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.020 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 [2024-11-20 14:33:37.521214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.585 14:33:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.585 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.842 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.842 "name": "Existed_Raid", 00:16:36.842 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:36.842 "strip_size_kb": 64, 00:16:36.842 "state": "configuring", 00:16:36.842 "raid_level": "raid5f", 00:16:36.842 "superblock": true, 00:16:36.842 "num_base_bdevs": 3, 00:16:36.842 "num_base_bdevs_discovered": 1, 00:16:36.842 "num_base_bdevs_operational": 3, 00:16:36.842 "base_bdevs_list": [ 00:16:36.842 { 00:16:36.842 "name": null, 00:16:36.842 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:36.842 "is_configured": false, 00:16:36.842 "data_offset": 0, 00:16:36.842 "data_size": 63488 00:16:36.842 }, 00:16:36.842 { 00:16:36.842 "name": null, 00:16:36.842 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:36.842 "is_configured": false, 00:16:36.842 "data_offset": 0, 00:16:36.842 "data_size": 63488 00:16:36.842 }, 00:16:36.842 { 00:16:36.842 "name": "BaseBdev3", 00:16:36.842 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:36.842 "is_configured": true, 00:16:36.842 "data_offset": 2048, 00:16:36.842 "data_size": 63488 00:16:36.842 } 00:16:36.842 ] 00:16:36.842 }' 00:16:36.842 14:33:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.842 14:33:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.111 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:37.111 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.111 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.111 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.111 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.398 [2024-11-20 14:33:38.186555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.398 14:33:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.398 "name": "Existed_Raid", 00:16:37.398 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:37.398 "strip_size_kb": 64, 00:16:37.398 "state": "configuring", 00:16:37.398 "raid_level": "raid5f", 00:16:37.398 "superblock": true, 00:16:37.398 "num_base_bdevs": 3, 00:16:37.398 "num_base_bdevs_discovered": 2, 00:16:37.398 "num_base_bdevs_operational": 3, 00:16:37.398 "base_bdevs_list": [ 00:16:37.398 { 00:16:37.398 "name": null, 00:16:37.398 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:37.398 "is_configured": false, 00:16:37.398 "data_offset": 0, 00:16:37.398 "data_size": 63488 00:16:37.398 }, 00:16:37.398 { 00:16:37.398 "name": "BaseBdev2", 00:16:37.398 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:37.398 "is_configured": true, 00:16:37.398 "data_offset": 2048, 00:16:37.398 "data_size": 63488 00:16:37.398 }, 00:16:37.398 { 
00:16:37.398 "name": "BaseBdev3", 00:16:37.398 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:37.398 "is_configured": true, 00:16:37.398 "data_offset": 2048, 00:16:37.398 "data_size": 63488 00:16:37.398 } 00:16:37.398 ] 00:16:37.398 }' 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.398 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.657 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.657 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:37.657 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.657 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.657 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.657 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 178eb624-6da5-4e57-bb23-bd2f5ee6630e 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.916 [2024-11-20 14:33:38.793012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:37.916 [2024-11-20 14:33:38.793401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:37.916 [2024-11-20 14:33:38.793440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:37.916 NewBaseBdev 00:16:37.916 [2024-11-20 14:33:38.793818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.916 [2024-11-20 14:33:38.798839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:37.916 
[2024-11-20 14:33:38.798871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:37.916 [2024-11-20 14:33:38.799254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.916 [ 00:16:37.916 { 00:16:37.916 "name": "NewBaseBdev", 00:16:37.916 "aliases": [ 00:16:37.916 "178eb624-6da5-4e57-bb23-bd2f5ee6630e" 00:16:37.916 ], 00:16:37.916 "product_name": "Malloc disk", 00:16:37.916 "block_size": 512, 00:16:37.916 "num_blocks": 65536, 00:16:37.916 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:37.916 "assigned_rate_limits": { 00:16:37.916 "rw_ios_per_sec": 0, 00:16:37.916 "rw_mbytes_per_sec": 0, 00:16:37.916 "r_mbytes_per_sec": 0, 00:16:37.916 "w_mbytes_per_sec": 0 00:16:37.916 }, 00:16:37.916 "claimed": true, 00:16:37.916 "claim_type": "exclusive_write", 00:16:37.916 "zoned": false, 00:16:37.916 "supported_io_types": { 00:16:37.916 "read": true, 00:16:37.916 "write": true, 00:16:37.916 "unmap": true, 00:16:37.916 "flush": true, 00:16:37.916 "reset": true, 00:16:37.916 "nvme_admin": false, 00:16:37.916 "nvme_io": false, 00:16:37.916 "nvme_io_md": false, 00:16:37.916 "write_zeroes": true, 00:16:37.916 "zcopy": true, 00:16:37.916 "get_zone_info": false, 00:16:37.916 "zone_management": false, 00:16:37.916 "zone_append": false, 00:16:37.916 "compare": false, 00:16:37.916 "compare_and_write": false, 00:16:37.916 "abort": true, 00:16:37.916 "seek_hole": false, 00:16:37.916 "seek_data": false, 
00:16:37.916 "copy": true, 00:16:37.916 "nvme_iov_md": false 00:16:37.916 }, 00:16:37.916 "memory_domains": [ 00:16:37.916 { 00:16:37.916 "dma_device_id": "system", 00:16:37.916 "dma_device_type": 1 00:16:37.916 }, 00:16:37.916 { 00:16:37.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.916 "dma_device_type": 2 00:16:37.916 } 00:16:37.916 ], 00:16:37.916 "driver_specific": {} 00:16:37.916 } 00:16:37.916 ] 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.916 14:33:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.916 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.917 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.917 "name": "Existed_Raid", 00:16:37.917 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:37.917 "strip_size_kb": 64, 00:16:37.917 "state": "online", 00:16:37.917 "raid_level": "raid5f", 00:16:37.917 "superblock": true, 00:16:37.917 "num_base_bdevs": 3, 00:16:37.917 "num_base_bdevs_discovered": 3, 00:16:37.917 "num_base_bdevs_operational": 3, 00:16:37.917 "base_bdevs_list": [ 00:16:37.917 { 00:16:37.917 "name": "NewBaseBdev", 00:16:37.917 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:37.917 "is_configured": true, 00:16:37.917 "data_offset": 2048, 00:16:37.917 "data_size": 63488 00:16:37.917 }, 00:16:37.917 { 00:16:37.917 "name": "BaseBdev2", 00:16:37.917 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:37.917 "is_configured": true, 00:16:37.917 "data_offset": 2048, 00:16:37.917 "data_size": 63488 00:16:37.917 }, 00:16:37.917 { 00:16:37.917 "name": "BaseBdev3", 00:16:37.917 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:37.917 "is_configured": true, 00:16:37.917 "data_offset": 2048, 00:16:37.917 "data_size": 63488 00:16:37.917 } 00:16:37.917 ] 00:16:37.917 }' 00:16:37.917 14:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.917 14:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.484 [2024-11-20 14:33:39.313739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.484 "name": "Existed_Raid", 00:16:38.484 "aliases": [ 00:16:38.484 "d372c88e-c824-445c-ae1a-61d4c293ac8d" 00:16:38.484 ], 00:16:38.484 "product_name": "Raid Volume", 00:16:38.484 "block_size": 512, 00:16:38.484 "num_blocks": 126976, 00:16:38.484 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:38.484 "assigned_rate_limits": { 00:16:38.484 "rw_ios_per_sec": 0, 00:16:38.484 "rw_mbytes_per_sec": 0, 00:16:38.484 "r_mbytes_per_sec": 0, 00:16:38.484 "w_mbytes_per_sec": 0 00:16:38.484 }, 00:16:38.484 "claimed": false, 00:16:38.484 "zoned": false, 00:16:38.484 
"supported_io_types": { 00:16:38.484 "read": true, 00:16:38.484 "write": true, 00:16:38.484 "unmap": false, 00:16:38.484 "flush": false, 00:16:38.484 "reset": true, 00:16:38.484 "nvme_admin": false, 00:16:38.484 "nvme_io": false, 00:16:38.484 "nvme_io_md": false, 00:16:38.484 "write_zeroes": true, 00:16:38.484 "zcopy": false, 00:16:38.484 "get_zone_info": false, 00:16:38.484 "zone_management": false, 00:16:38.484 "zone_append": false, 00:16:38.484 "compare": false, 00:16:38.484 "compare_and_write": false, 00:16:38.484 "abort": false, 00:16:38.484 "seek_hole": false, 00:16:38.484 "seek_data": false, 00:16:38.484 "copy": false, 00:16:38.484 "nvme_iov_md": false 00:16:38.484 }, 00:16:38.484 "driver_specific": { 00:16:38.484 "raid": { 00:16:38.484 "uuid": "d372c88e-c824-445c-ae1a-61d4c293ac8d", 00:16:38.484 "strip_size_kb": 64, 00:16:38.484 "state": "online", 00:16:38.484 "raid_level": "raid5f", 00:16:38.484 "superblock": true, 00:16:38.484 "num_base_bdevs": 3, 00:16:38.484 "num_base_bdevs_discovered": 3, 00:16:38.484 "num_base_bdevs_operational": 3, 00:16:38.484 "base_bdevs_list": [ 00:16:38.484 { 00:16:38.484 "name": "NewBaseBdev", 00:16:38.484 "uuid": "178eb624-6da5-4e57-bb23-bd2f5ee6630e", 00:16:38.484 "is_configured": true, 00:16:38.484 "data_offset": 2048, 00:16:38.484 "data_size": 63488 00:16:38.484 }, 00:16:38.484 { 00:16:38.484 "name": "BaseBdev2", 00:16:38.484 "uuid": "faaee65b-c4ef-4c0f-bde8-463b307e6f3e", 00:16:38.484 "is_configured": true, 00:16:38.484 "data_offset": 2048, 00:16:38.484 "data_size": 63488 00:16:38.484 }, 00:16:38.484 { 00:16:38.484 "name": "BaseBdev3", 00:16:38.484 "uuid": "f0f4f157-64c7-479c-a35b-d1fc57a7d013", 00:16:38.484 "is_configured": true, 00:16:38.484 "data_offset": 2048, 00:16:38.484 "data_size": 63488 00:16:38.484 } 00:16:38.484 ] 00:16:38.484 } 00:16:38.484 } 00:16:38.484 }' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:38.484 BaseBdev2 00:16:38.484 BaseBdev3' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.484 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.742 [2024-11-20 14:33:39.593527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.742 [2024-11-20 14:33:39.593613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:16:38.742 [2024-11-20 14:33:39.593771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.742 [2024-11-20 14:33:39.594197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.742 [2024-11-20 14:33:39.594238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80947 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80947 ']' 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80947 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80947 00:16:38.742 killing process with pid 80947 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80947' 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80947 00:16:38.742 [2024-11-20 14:33:39.634370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.742 14:33:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80947 00:16:39.000 [2024-11-20 14:33:39.929191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.372 14:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:40.372 00:16:40.372 real 0m11.807s 00:16:40.372 user 0m19.347s 00:16:40.372 sys 0m1.732s 00:16:40.372 14:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.372 14:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.372 ************************************ 00:16:40.372 END TEST raid5f_state_function_test_sb 00:16:40.372 ************************************ 00:16:40.372 14:33:41 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:40.372 14:33:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:40.372 14:33:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.372 14:33:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.372 ************************************ 00:16:40.372 START TEST raid5f_superblock_test 00:16:40.372 ************************************ 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:40.372 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81581 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81581 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81581 ']' 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.373 14:33:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.373 [2024-11-20 14:33:41.252804] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:16:40.373 [2024-11-20 14:33:41.253001] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81581 ] 00:16:40.631 [2024-11-20 14:33:41.432529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.631 [2024-11-20 14:33:41.582174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.889 [2024-11-20 14:33:41.806528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.889 [2024-11-20 14:33:41.806601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.456 malloc1 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.456 [2024-11-20 14:33:42.292065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:41.456 [2024-11-20 14:33:42.292138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.456 [2024-11-20 14:33:42.292171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:41.456 [2024-11-20 14:33:42.292188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.456 [2024-11-20 14:33:42.295135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.456 [2024-11-20 14:33:42.295178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:41.456 pt1 00:16:41.456 
14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.456 malloc2 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.456 [2024-11-20 14:33:42.348877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.456 [2024-11-20 
14:33:42.348946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.456 [2024-11-20 14:33:42.348985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:41.456 [2024-11-20 14:33:42.349000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.456 [2024-11-20 14:33:42.351839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.456 [2024-11-20 14:33:42.351880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.456 pt2 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.456 malloc3 00:16:41.456 14:33:42 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.456 [2024-11-20 14:33:42.416392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:41.456 [2024-11-20 14:33:42.416456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.456 [2024-11-20 14:33:42.416490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:41.456 [2024-11-20 14:33:42.416505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.456 [2024-11-20 14:33:42.419374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.456 [2024-11-20 14:33:42.419417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:41.456 pt3 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.456 [2024-11-20 14:33:42.428478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:16:41.456 [2024-11-20 14:33:42.431015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.456 [2024-11-20 14:33:42.431124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.456 [2024-11-20 14:33:42.431373] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:41.456 [2024-11-20 14:33:42.431413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:41.456 [2024-11-20 14:33:42.431744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:41.456 [2024-11-20 14:33:42.436917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:41.456 [2024-11-20 14:33:42.436948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:41.456 [2024-11-20 14:33:42.437199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.456 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.457 "name": "raid_bdev1", 00:16:41.457 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:41.457 "strip_size_kb": 64, 00:16:41.457 "state": "online", 00:16:41.457 "raid_level": "raid5f", 00:16:41.457 "superblock": true, 00:16:41.457 "num_base_bdevs": 3, 00:16:41.457 "num_base_bdevs_discovered": 3, 00:16:41.457 "num_base_bdevs_operational": 3, 00:16:41.457 "base_bdevs_list": [ 00:16:41.457 { 00:16:41.457 "name": "pt1", 00:16:41.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:41.457 "is_configured": true, 00:16:41.457 "data_offset": 2048, 00:16:41.457 "data_size": 63488 00:16:41.457 }, 00:16:41.457 { 00:16:41.457 "name": "pt2", 00:16:41.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.457 "is_configured": true, 00:16:41.457 "data_offset": 2048, 00:16:41.457 "data_size": 63488 00:16:41.457 }, 00:16:41.457 { 00:16:41.457 "name": "pt3", 00:16:41.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.457 "is_configured": true, 00:16:41.457 "data_offset": 2048, 00:16:41.457 "data_size": 63488 00:16:41.457 } 00:16:41.457 ] 
00:16:41.457 }' 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.457 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.023 [2024-11-20 14:33:42.923278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:42.023 "name": "raid_bdev1", 00:16:42.023 "aliases": [ 00:16:42.023 "31c836cc-2fd0-4096-a044-7d826038e543" 00:16:42.023 ], 00:16:42.023 "product_name": "Raid Volume", 00:16:42.023 "block_size": 512, 00:16:42.023 "num_blocks": 126976, 00:16:42.023 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:42.023 "assigned_rate_limits": { 00:16:42.023 
"rw_ios_per_sec": 0, 00:16:42.023 "rw_mbytes_per_sec": 0, 00:16:42.023 "r_mbytes_per_sec": 0, 00:16:42.023 "w_mbytes_per_sec": 0 00:16:42.023 }, 00:16:42.023 "claimed": false, 00:16:42.023 "zoned": false, 00:16:42.023 "supported_io_types": { 00:16:42.023 "read": true, 00:16:42.023 "write": true, 00:16:42.023 "unmap": false, 00:16:42.023 "flush": false, 00:16:42.023 "reset": true, 00:16:42.023 "nvme_admin": false, 00:16:42.023 "nvme_io": false, 00:16:42.023 "nvme_io_md": false, 00:16:42.023 "write_zeroes": true, 00:16:42.023 "zcopy": false, 00:16:42.023 "get_zone_info": false, 00:16:42.023 "zone_management": false, 00:16:42.023 "zone_append": false, 00:16:42.023 "compare": false, 00:16:42.023 "compare_and_write": false, 00:16:42.023 "abort": false, 00:16:42.023 "seek_hole": false, 00:16:42.023 "seek_data": false, 00:16:42.023 "copy": false, 00:16:42.023 "nvme_iov_md": false 00:16:42.023 }, 00:16:42.023 "driver_specific": { 00:16:42.023 "raid": { 00:16:42.023 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:42.023 "strip_size_kb": 64, 00:16:42.023 "state": "online", 00:16:42.023 "raid_level": "raid5f", 00:16:42.023 "superblock": true, 00:16:42.023 "num_base_bdevs": 3, 00:16:42.023 "num_base_bdevs_discovered": 3, 00:16:42.023 "num_base_bdevs_operational": 3, 00:16:42.023 "base_bdevs_list": [ 00:16:42.023 { 00:16:42.023 "name": "pt1", 00:16:42.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.023 "is_configured": true, 00:16:42.023 "data_offset": 2048, 00:16:42.023 "data_size": 63488 00:16:42.023 }, 00:16:42.023 { 00:16:42.023 "name": "pt2", 00:16:42.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.023 "is_configured": true, 00:16:42.023 "data_offset": 2048, 00:16:42.023 "data_size": 63488 00:16:42.023 }, 00:16:42.023 { 00:16:42.023 "name": "pt3", 00:16:42.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.023 "is_configured": true, 00:16:42.023 "data_offset": 2048, 00:16:42.023 "data_size": 63488 00:16:42.023 } 00:16:42.023 ] 
00:16:42.023 } 00:16:42.023 } 00:16:42.023 }' 00:16:42.023 14:33:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:42.023 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:42.023 pt2 00:16:42.023 pt3' 00:16:42.024 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.024 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:42.024 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.024 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:42.024 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.024 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.024 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:42.384 [2024-11-20 14:33:43.223256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=31c836cc-2fd0-4096-a044-7d826038e543 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 31c836cc-2fd0-4096-a044-7d826038e543 ']' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 [2024-11-20 14:33:43.275029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.384 [2024-11-20 14:33:43.275065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.384 [2024-11-20 14:33:43.275155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.384 [2024-11-20 14:33:43.275262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.384 [2024-11-20 14:33:43.275278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:42.384 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.642 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.642 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:42.642 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:42.642 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.643 [2024-11-20 14:33:43.427128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:42.643 [2024-11-20 
14:33:43.429599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:42.643 [2024-11-20 14:33:43.429717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:42.643 [2024-11-20 14:33:43.429797] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:42.643 [2024-11-20 14:33:43.429864] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:42.643 [2024-11-20 14:33:43.429899] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:42.643 [2024-11-20 14:33:43.429928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.643 [2024-11-20 14:33:43.429941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:42.643 request: 00:16:42.643 { 00:16:42.643 "name": "raid_bdev1", 00:16:42.643 "raid_level": "raid5f", 00:16:42.643 "base_bdevs": [ 00:16:42.643 "malloc1", 00:16:42.643 "malloc2", 00:16:42.643 "malloc3" 00:16:42.643 ], 00:16:42.643 "strip_size_kb": 64, 00:16:42.643 "superblock": false, 00:16:42.643 "method": "bdev_raid_create", 00:16:42.643 "req_id": 1 00:16:42.643 } 00:16:42.643 Got JSON-RPC error response 00:16:42.643 response: 00:16:42.643 { 00:16:42.643 "code": -17, 00:16:42.643 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:42.643 } 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.643 [2024-11-20 14:33:43.495085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.643 [2024-11-20 14:33:43.495786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.643 [2024-11-20 14:33:43.495831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:42.643 [2024-11-20 14:33:43.495848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.643 [2024-11-20 14:33:43.498840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.643 [2024-11-20 14:33:43.498885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.643 [2024-11-20 14:33:43.498984] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:42.643 [2024-11-20 14:33:43.499057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.643 pt1 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.643 "name": "raid_bdev1", 00:16:42.643 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:42.643 "strip_size_kb": 64, 00:16:42.643 "state": "configuring", 00:16:42.643 "raid_level": "raid5f", 00:16:42.643 "superblock": true, 00:16:42.643 "num_base_bdevs": 3, 00:16:42.643 "num_base_bdevs_discovered": 1, 00:16:42.643 "num_base_bdevs_operational": 3, 00:16:42.643 "base_bdevs_list": [ 00:16:42.643 { 00:16:42.643 "name": "pt1", 00:16:42.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:42.643 "is_configured": true, 00:16:42.643 "data_offset": 2048, 00:16:42.643 "data_size": 63488 00:16:42.643 }, 00:16:42.643 { 00:16:42.643 "name": null, 00:16:42.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.643 "is_configured": false, 00:16:42.643 "data_offset": 2048, 00:16:42.643 "data_size": 63488 00:16:42.643 }, 00:16:42.643 { 00:16:42.643 "name": null, 00:16:42.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.643 "is_configured": false, 00:16:42.643 "data_offset": 2048, 00:16:42.643 "data_size": 63488 00:16:42.643 } 00:16:42.643 ] 00:16:42.643 }' 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.643 14:33:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.210 [2024-11-20 14:33:44.039468] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.210 [2024-11-20 14:33:44.039706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.210 [2024-11-20 14:33:44.039756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:43.210 [2024-11-20 14:33:44.039774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.210 [2024-11-20 14:33:44.040366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.210 [2024-11-20 14:33:44.040408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.210 [2024-11-20 14:33:44.040548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:43.210 [2024-11-20 14:33:44.040601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.210 pt2 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.210 [2024-11-20 14:33:44.047439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.210 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.210 "name": "raid_bdev1", 00:16:43.210 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:43.210 "strip_size_kb": 64, 00:16:43.211 "state": "configuring", 00:16:43.211 "raid_level": "raid5f", 00:16:43.211 "superblock": true, 00:16:43.211 "num_base_bdevs": 3, 00:16:43.211 "num_base_bdevs_discovered": 1, 00:16:43.211 "num_base_bdevs_operational": 3, 00:16:43.211 "base_bdevs_list": [ 00:16:43.211 { 00:16:43.211 "name": "pt1", 00:16:43.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.211 "is_configured": true, 00:16:43.211 "data_offset": 2048, 00:16:43.211 "data_size": 63488 00:16:43.211 }, 00:16:43.211 { 
00:16:43.211 "name": null, 00:16:43.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.211 "is_configured": false, 00:16:43.211 "data_offset": 0, 00:16:43.211 "data_size": 63488 00:16:43.211 }, 00:16:43.211 { 00:16:43.211 "name": null, 00:16:43.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.211 "is_configured": false, 00:16:43.211 "data_offset": 2048, 00:16:43.211 "data_size": 63488 00:16:43.211 } 00:16:43.211 ] 00:16:43.211 }' 00:16:43.211 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.211 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 [2024-11-20 14:33:44.599612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:43.777 [2024-11-20 14:33:44.599716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.777 [2024-11-20 14:33:44.599748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:43.777 [2024-11-20 14:33:44.599766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.777 [2024-11-20 14:33:44.600660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.777 [2024-11-20 14:33:44.600699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:43.777 [2024-11-20 
14:33:44.600818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:43.777 [2024-11-20 14:33:44.600859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.777 pt2 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.777 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 [2024-11-20 14:33:44.607576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:43.777 [2024-11-20 14:33:44.607796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.777 [2024-11-20 14:33:44.607831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:43.777 [2024-11-20 14:33:44.607849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.777 [2024-11-20 14:33:44.608312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.777 [2024-11-20 14:33:44.608358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:43.777 [2024-11-20 14:33:44.608438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:43.777 [2024-11-20 14:33:44.608472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.777 [2024-11-20 14:33:44.608657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:43.777 [2024-11-20 14:33:44.608683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:43.778 [2024-11-20 14:33:44.608994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:43.778 [2024-11-20 14:33:44.613986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:43.778 [2024-11-20 14:33:44.614013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:43.778 [2024-11-20 14:33:44.614258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.778 pt3 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.778 "name": "raid_bdev1", 00:16:43.778 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:43.778 "strip_size_kb": 64, 00:16:43.778 "state": "online", 00:16:43.778 "raid_level": "raid5f", 00:16:43.778 "superblock": true, 00:16:43.778 "num_base_bdevs": 3, 00:16:43.778 "num_base_bdevs_discovered": 3, 00:16:43.778 "num_base_bdevs_operational": 3, 00:16:43.778 "base_bdevs_list": [ 00:16:43.778 { 00:16:43.778 "name": "pt1", 00:16:43.778 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:43.778 "is_configured": true, 00:16:43.778 "data_offset": 2048, 00:16:43.778 "data_size": 63488 00:16:43.778 }, 00:16:43.778 { 00:16:43.778 "name": "pt2", 00:16:43.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:43.778 "is_configured": true, 00:16:43.778 "data_offset": 2048, 00:16:43.778 "data_size": 63488 00:16:43.778 }, 00:16:43.778 { 00:16:43.778 "name": "pt3", 00:16:43.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:43.778 "is_configured": true, 00:16:43.778 "data_offset": 2048, 00:16:43.778 "data_size": 63488 00:16:43.778 } 00:16:43.778 ] 00:16:43.778 }' 00:16:43.778 14:33:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.778 14:33:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.345 [2024-11-20 14:33:45.132403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.345 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.345 "name": "raid_bdev1", 00:16:44.345 "aliases": [ 00:16:44.345 "31c836cc-2fd0-4096-a044-7d826038e543" 00:16:44.345 ], 00:16:44.345 "product_name": "Raid Volume", 00:16:44.345 "block_size": 512, 00:16:44.345 "num_blocks": 126976, 00:16:44.345 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:44.345 "assigned_rate_limits": { 00:16:44.345 "rw_ios_per_sec": 0, 00:16:44.345 "rw_mbytes_per_sec": 0, 00:16:44.345 "r_mbytes_per_sec": 0, 00:16:44.345 "w_mbytes_per_sec": 0 00:16:44.345 }, 
00:16:44.345 "claimed": false, 00:16:44.345 "zoned": false, 00:16:44.345 "supported_io_types": { 00:16:44.345 "read": true, 00:16:44.345 "write": true, 00:16:44.345 "unmap": false, 00:16:44.345 "flush": false, 00:16:44.345 "reset": true, 00:16:44.345 "nvme_admin": false, 00:16:44.345 "nvme_io": false, 00:16:44.345 "nvme_io_md": false, 00:16:44.345 "write_zeroes": true, 00:16:44.345 "zcopy": false, 00:16:44.345 "get_zone_info": false, 00:16:44.346 "zone_management": false, 00:16:44.346 "zone_append": false, 00:16:44.346 "compare": false, 00:16:44.346 "compare_and_write": false, 00:16:44.346 "abort": false, 00:16:44.346 "seek_hole": false, 00:16:44.346 "seek_data": false, 00:16:44.346 "copy": false, 00:16:44.346 "nvme_iov_md": false 00:16:44.346 }, 00:16:44.346 "driver_specific": { 00:16:44.346 "raid": { 00:16:44.346 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:44.346 "strip_size_kb": 64, 00:16:44.346 "state": "online", 00:16:44.346 "raid_level": "raid5f", 00:16:44.346 "superblock": true, 00:16:44.346 "num_base_bdevs": 3, 00:16:44.346 "num_base_bdevs_discovered": 3, 00:16:44.346 "num_base_bdevs_operational": 3, 00:16:44.346 "base_bdevs_list": [ 00:16:44.346 { 00:16:44.346 "name": "pt1", 00:16:44.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:44.346 "is_configured": true, 00:16:44.346 "data_offset": 2048, 00:16:44.346 "data_size": 63488 00:16:44.346 }, 00:16:44.346 { 00:16:44.346 "name": "pt2", 00:16:44.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.346 "is_configured": true, 00:16:44.346 "data_offset": 2048, 00:16:44.346 "data_size": 63488 00:16:44.346 }, 00:16:44.346 { 00:16:44.346 "name": "pt3", 00:16:44.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.346 "is_configured": true, 00:16:44.346 "data_offset": 2048, 00:16:44.346 "data_size": 63488 00:16:44.346 } 00:16:44.346 ] 00:16:44.346 } 00:16:44.346 } 00:16:44.346 }' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:44.346 pt2 00:16:44.346 pt3' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.346 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 [2024-11-20 14:33:45.460355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
31c836cc-2fd0-4096-a044-7d826038e543 '!=' 31c836cc-2fd0-4096-a044-7d826038e543 ']' 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 [2024-11-20 14:33:45.508210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.605 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.605 "name": "raid_bdev1", 00:16:44.605 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:44.605 "strip_size_kb": 64, 00:16:44.605 "state": "online", 00:16:44.605 "raid_level": "raid5f", 00:16:44.605 "superblock": true, 00:16:44.605 "num_base_bdevs": 3, 00:16:44.605 "num_base_bdevs_discovered": 2, 00:16:44.605 "num_base_bdevs_operational": 2, 00:16:44.605 "base_bdevs_list": [ 00:16:44.605 { 00:16:44.605 "name": null, 00:16:44.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.605 "is_configured": false, 00:16:44.605 "data_offset": 0, 00:16:44.605 "data_size": 63488 00:16:44.605 }, 00:16:44.605 { 00:16:44.605 "name": "pt2", 00:16:44.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.605 "is_configured": true, 00:16:44.605 "data_offset": 2048, 00:16:44.605 "data_size": 63488 00:16:44.605 }, 00:16:44.605 { 00:16:44.605 "name": "pt3", 00:16:44.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.606 "is_configured": true, 00:16:44.606 "data_offset": 2048, 00:16:44.606 "data_size": 63488 00:16:44.606 } 00:16:44.606 ] 00:16:44.606 }' 00:16:44.606 14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.606 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.172 
14:33:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:45.172 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.172 14:33:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.172 [2024-11-20 14:33:46.000333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.172 [2024-11-20 14:33:46.000370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.172 [2024-11-20 14:33:46.000474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.172 [2024-11-20 14:33:46.000577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.172 [2024-11-20 14:33:46.000601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.172 [2024-11-20 14:33:46.072300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:16:45.172 [2024-11-20 14:33:46.072369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.172 [2024-11-20 14:33:46.072397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:45.172 [2024-11-20 14:33:46.072415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.172 [2024-11-20 14:33:46.075406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.172 [2024-11-20 14:33:46.075456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:45.172 [2024-11-20 14:33:46.075588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:45.172 [2024-11-20 14:33:46.075654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:45.172 pt2 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.172 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.173 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.173 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.173 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.173 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.173 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.173 "name": "raid_bdev1", 00:16:45.173 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:45.173 "strip_size_kb": 64, 00:16:45.173 "state": "configuring", 00:16:45.173 "raid_level": "raid5f", 00:16:45.173 "superblock": true, 00:16:45.173 "num_base_bdevs": 3, 00:16:45.173 "num_base_bdevs_discovered": 1, 00:16:45.173 "num_base_bdevs_operational": 2, 00:16:45.173 "base_bdevs_list": [ 00:16:45.173 { 00:16:45.173 "name": null, 00:16:45.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.173 "is_configured": false, 00:16:45.173 "data_offset": 2048, 00:16:45.173 "data_size": 63488 00:16:45.173 }, 00:16:45.173 { 00:16:45.173 "name": "pt2", 00:16:45.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.173 "is_configured": true, 00:16:45.173 "data_offset": 2048, 00:16:45.173 "data_size": 63488 00:16:45.173 }, 00:16:45.173 { 00:16:45.173 "name": null, 00:16:45.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.173 "is_configured": false, 00:16:45.173 "data_offset": 2048, 00:16:45.173 "data_size": 63488 00:16:45.173 } 00:16:45.173 ] 00:16:45.173 }' 00:16:45.173 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.173 14:33:46 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.739 [2024-11-20 14:33:46.616479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:45.739 [2024-11-20 14:33:46.616574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.739 [2024-11-20 14:33:46.616611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:45.739 [2024-11-20 14:33:46.616653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.739 [2024-11-20 14:33:46.617288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.739 [2024-11-20 14:33:46.617336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:45.739 [2024-11-20 14:33:46.617446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:45.739 [2024-11-20 14:33:46.617499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:45.739 [2024-11-20 14:33:46.617679] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:45.739 [2024-11-20 14:33:46.617704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:45.739 [2024-11-20 
14:33:46.618029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:45.739 [2024-11-20 14:33:46.623223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:45.739 [2024-11-20 14:33:46.623370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:45.739 [2024-11-20 14:33:46.623946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.739 pt3 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.739 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.739 "name": "raid_bdev1", 00:16:45.739 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:45.739 "strip_size_kb": 64, 00:16:45.739 "state": "online", 00:16:45.739 "raid_level": "raid5f", 00:16:45.739 "superblock": true, 00:16:45.739 "num_base_bdevs": 3, 00:16:45.739 "num_base_bdevs_discovered": 2, 00:16:45.739 "num_base_bdevs_operational": 2, 00:16:45.740 "base_bdevs_list": [ 00:16:45.740 { 00:16:45.740 "name": null, 00:16:45.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.740 "is_configured": false, 00:16:45.740 "data_offset": 2048, 00:16:45.740 "data_size": 63488 00:16:45.740 }, 00:16:45.740 { 00:16:45.740 "name": "pt2", 00:16:45.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.740 "is_configured": true, 00:16:45.740 "data_offset": 2048, 00:16:45.740 "data_size": 63488 00:16:45.740 }, 00:16:45.740 { 00:16:45.740 "name": "pt3", 00:16:45.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.740 "is_configured": true, 00:16:45.740 "data_offset": 2048, 00:16:45.740 "data_size": 63488 00:16:45.740 } 00:16:45.740 ] 00:16:45.740 }' 00:16:45.740 14:33:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.740 14:33:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:46.306 [2024-11-20 14:33:47.145828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.306 [2024-11-20 14:33:47.145999] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.306 [2024-11-20 14:33:47.146138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.306 [2024-11-20 14:33:47.146232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.306 [2024-11-20 14:33:47.146249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.306 14:33:47 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.306 [2024-11-20 14:33:47.213844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:46.306 [2024-11-20 14:33:47.213914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.306 [2024-11-20 14:33:47.213946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:46.306 [2024-11-20 14:33:47.213962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.306 [2024-11-20 14:33:47.217052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.306 [2024-11-20 14:33:47.217096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:46.306 [2024-11-20 14:33:47.217214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:46.306 [2024-11-20 14:33:47.217278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:46.306 [2024-11-20 14:33:47.217457] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:46.306 [2024-11-20 14:33:47.217475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.306 [2024-11-20 14:33:47.217498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:46.306 
[2024-11-20 14:33:47.217580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:46.306 pt1 00:16:46.306 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.307 "name": "raid_bdev1", 00:16:46.307 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:46.307 "strip_size_kb": 64, 00:16:46.307 "state": "configuring", 00:16:46.307 "raid_level": "raid5f", 00:16:46.307 "superblock": true, 00:16:46.307 "num_base_bdevs": 3, 00:16:46.307 "num_base_bdevs_discovered": 1, 00:16:46.307 "num_base_bdevs_operational": 2, 00:16:46.307 "base_bdevs_list": [ 00:16:46.307 { 00:16:46.307 "name": null, 00:16:46.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.307 "is_configured": false, 00:16:46.307 "data_offset": 2048, 00:16:46.307 "data_size": 63488 00:16:46.307 }, 00:16:46.307 { 00:16:46.307 "name": "pt2", 00:16:46.307 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.307 "is_configured": true, 00:16:46.307 "data_offset": 2048, 00:16:46.307 "data_size": 63488 00:16:46.307 }, 00:16:46.307 { 00:16:46.307 "name": null, 00:16:46.307 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.307 "is_configured": false, 00:16:46.307 "data_offset": 2048, 00:16:46.307 "data_size": 63488 00:16:46.307 } 00:16:46.307 ] 00:16:46.307 }' 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.307 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.873 [2024-11-20 14:33:47.786008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:46.873 [2024-11-20 14:33:47.786089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.873 [2024-11-20 14:33:47.786135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:46.873 [2024-11-20 14:33:47.786155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.873 [2024-11-20 14:33:47.786815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.873 [2024-11-20 14:33:47.786854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:46.873 [2024-11-20 14:33:47.786960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:46.873 [2024-11-20 14:33:47.786994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:46.873 [2024-11-20 14:33:47.787158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:46.873 [2024-11-20 14:33:47.787174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:46.873 [2024-11-20 14:33:47.787501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:46.873 [2024-11-20 14:33:47.792620] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:46.873 pt3 
00:16:46.873 [2024-11-20 14:33:47.792832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:46.873 [2024-11-20 14:33:47.793155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.873 14:33:47 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.873 "name": "raid_bdev1", 00:16:46.873 "uuid": "31c836cc-2fd0-4096-a044-7d826038e543", 00:16:46.873 "strip_size_kb": 64, 00:16:46.873 "state": "online", 00:16:46.873 "raid_level": "raid5f", 00:16:46.873 "superblock": true, 00:16:46.873 "num_base_bdevs": 3, 00:16:46.873 "num_base_bdevs_discovered": 2, 00:16:46.873 "num_base_bdevs_operational": 2, 00:16:46.873 "base_bdevs_list": [ 00:16:46.873 { 00:16:46.873 "name": null, 00:16:46.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.873 "is_configured": false, 00:16:46.873 "data_offset": 2048, 00:16:46.873 "data_size": 63488 00:16:46.873 }, 00:16:46.873 { 00:16:46.873 "name": "pt2", 00:16:46.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:46.873 "is_configured": true, 00:16:46.873 "data_offset": 2048, 00:16:46.873 "data_size": 63488 00:16:46.873 }, 00:16:46.873 { 00:16:46.873 "name": "pt3", 00:16:46.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:46.873 "is_configured": true, 00:16:46.873 "data_offset": 2048, 00:16:46.873 "data_size": 63488 00:16:46.873 } 00:16:46.873 ] 00:16:46.873 }' 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.873 14:33:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.442 [2024-11-20 14:33:48.375278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 31c836cc-2fd0-4096-a044-7d826038e543 '!=' 31c836cc-2fd0-4096-a044-7d826038e543 ']' 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81581 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81581 ']' 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81581 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81581 00:16:47.442 killing process with pid 81581 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81581' 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81581 00:16:47.442 [2024-11-20 14:33:48.462026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.442 14:33:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81581 00:16:47.442 [2024-11-20 14:33:48.462167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.442 [2024-11-20 14:33:48.462271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.442 [2024-11-20 14:33:48.462302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:47.703 [2024-11-20 14:33:48.747469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.112 14:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:49.112 00:16:49.112 real 0m8.704s 00:16:49.112 user 0m14.085s 00:16:49.113 sys 0m1.312s 00:16:49.113 ************************************ 00:16:49.113 END TEST raid5f_superblock_test 00:16:49.113 ************************************ 00:16:49.113 14:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.113 14:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.113 14:33:49 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:49.113 14:33:49 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:49.113 14:33:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:49.113 14:33:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.113 14:33:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.113 ************************************ 00:16:49.113 START TEST 
raid5f_rebuild_test 00:16:49.113 ************************************ 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:49.113 14:33:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82038 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82038 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82038 ']' 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:16:49.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.113 14:33:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.113 [2024-11-20 14:33:50.015502] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:16:49.113 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:49.113 Zero copy mechanism will not be used. 00:16:49.113 [2024-11-20 14:33:50.016329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82038 ] 00:16:49.371 [2024-11-20 14:33:50.198655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.371 [2024-11-20 14:33:50.341069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.628 [2024-11-20 14:33:50.548572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.628 [2024-11-20 14:33:50.548644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.195 14:33:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.195 14:33:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:50.195 14:33:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.195 14:33:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:50.195 14:33:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:50.195 BaseBdev1_malloc 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 [2024-11-20 14:33:51.036460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:50.195 [2024-11-20 14:33:51.036719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.195 [2024-11-20 14:33:51.036766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.195 [2024-11-20 14:33:51.036787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.195 [2024-11-20 14:33:51.039595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.195 [2024-11-20 14:33:51.039665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.195 BaseBdev1 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 BaseBdev2_malloc 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 [2024-11-20 14:33:51.084718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:50.195 [2024-11-20 14:33:51.084805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.195 [2024-11-20 14:33:51.084841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.195 [2024-11-20 14:33:51.084860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.195 [2024-11-20 14:33:51.087615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.195 [2024-11-20 14:33:51.087678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:50.195 BaseBdev2 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 BaseBdev3_malloc 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 
14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 [2024-11-20 14:33:51.149705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:50.195 [2024-11-20 14:33:51.149908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.195 [2024-11-20 14:33:51.150056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:50.195 [2024-11-20 14:33:51.150190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.195 [2024-11-20 14:33:51.153005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.195 [2024-11-20 14:33:51.153168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:50.195 BaseBdev3 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 spare_malloc 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 spare_delay 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 [2024-11-20 14:33:51.209939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.195 [2024-11-20 14:33:51.210013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.195 [2024-11-20 14:33:51.210042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:50.195 [2024-11-20 14:33:51.210061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.195 [2024-11-20 14:33:51.213028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.195 [2024-11-20 14:33:51.213082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.195 spare 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.195 [2024-11-20 14:33:51.218027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.195 [2024-11-20 14:33:51.220467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.195 [2024-11-20 14:33:51.220570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.195 [2024-11-20 14:33:51.220713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.195 
[2024-11-20 14:33:51.220733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:50.195 [2024-11-20 14:33:51.221057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:50.195 [2024-11-20 14:33:51.226279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.195 [2024-11-20 14:33:51.226316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.195 [2024-11-20 14:33:51.226548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.195 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.196 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.196 14:33:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.196 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.196 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.196 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.454 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.454 "name": "raid_bdev1", 00:16:50.454 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:50.454 "strip_size_kb": 64, 00:16:50.454 "state": "online", 00:16:50.454 "raid_level": "raid5f", 00:16:50.454 "superblock": false, 00:16:50.454 "num_base_bdevs": 3, 00:16:50.454 "num_base_bdevs_discovered": 3, 00:16:50.454 "num_base_bdevs_operational": 3, 00:16:50.454 "base_bdevs_list": [ 00:16:50.454 { 00:16:50.454 "name": "BaseBdev1", 00:16:50.454 "uuid": "4af854cb-d662-50a6-82da-2728d5ad637c", 00:16:50.454 "is_configured": true, 00:16:50.454 "data_offset": 0, 00:16:50.454 "data_size": 65536 00:16:50.454 }, 00:16:50.454 { 00:16:50.454 "name": "BaseBdev2", 00:16:50.454 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:50.454 "is_configured": true, 00:16:50.454 "data_offset": 0, 00:16:50.454 "data_size": 65536 00:16:50.454 }, 00:16:50.454 { 00:16:50.454 "name": "BaseBdev3", 00:16:50.454 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:50.454 "is_configured": true, 00:16:50.454 "data_offset": 0, 00:16:50.454 "data_size": 65536 00:16:50.454 } 00:16:50.454 ] 00:16:50.454 }' 00:16:50.454 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.454 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.021 [2024-11-20 14:33:51.780722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.021 14:33:51 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:51.021 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.022 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.022 14:33:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:51.280 [2024-11-20 14:33:52.152579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:51.280 /dev/nbd0 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:51.280 1+0 records in 00:16:51.280 1+0 records out 00:16:51.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029536 s, 13.9 MB/s 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:51.280 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:51.847 512+0 records in 00:16:51.847 512+0 records out 00:16:51.847 67108864 bytes (67 MB, 64 MiB) copied, 0.468525 s, 143 MB/s 00:16:51.847 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:51.847 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.847 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.847 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.847 14:33:52 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:51.847 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.847 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:52.106 [2024-11-20 14:33:52.933961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.106 [2024-11-20 14:33:52.963866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.106 14:33:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.106 14:33:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.106 "name": "raid_bdev1", 00:16:52.106 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:52.106 "strip_size_kb": 64, 00:16:52.106 "state": "online", 00:16:52.106 "raid_level": "raid5f", 00:16:52.106 "superblock": false, 00:16:52.106 "num_base_bdevs": 3, 00:16:52.106 "num_base_bdevs_discovered": 2, 00:16:52.106 "num_base_bdevs_operational": 2, 00:16:52.106 "base_bdevs_list": [ 00:16:52.106 { 00:16:52.106 "name": null, 00:16:52.106 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:52.106 "is_configured": false, 00:16:52.106 "data_offset": 0, 00:16:52.106 "data_size": 65536 00:16:52.106 }, 00:16:52.106 { 00:16:52.106 "name": "BaseBdev2", 00:16:52.106 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:52.106 "is_configured": true, 00:16:52.106 "data_offset": 0, 00:16:52.106 "data_size": 65536 00:16:52.106 }, 00:16:52.106 { 00:16:52.106 "name": "BaseBdev3", 00:16:52.106 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:52.106 "is_configured": true, 00:16:52.106 "data_offset": 0, 00:16:52.106 "data_size": 65536 00:16:52.106 } 00:16:52.106 ] 00:16:52.106 }' 00:16:52.106 14:33:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.106 14:33:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.673 14:33:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:52.673 14:33:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.673 14:33:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.673 [2024-11-20 14:33:53.460021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.673 [2024-11-20 14:33:53.475459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:52.673 14:33:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.673 14:33:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:52.673 [2024-11-20 14:33:53.483053] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.607 
14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.607 "name": "raid_bdev1", 00:16:53.607 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:53.607 "strip_size_kb": 64, 00:16:53.607 "state": "online", 00:16:53.607 "raid_level": "raid5f", 00:16:53.607 "superblock": false, 00:16:53.607 "num_base_bdevs": 3, 00:16:53.607 "num_base_bdevs_discovered": 3, 00:16:53.607 "num_base_bdevs_operational": 3, 00:16:53.607 "process": { 00:16:53.607 "type": "rebuild", 00:16:53.607 "target": "spare", 00:16:53.607 "progress": { 00:16:53.607 "blocks": 18432, 00:16:53.607 "percent": 14 00:16:53.607 } 00:16:53.607 }, 00:16:53.607 "base_bdevs_list": [ 00:16:53.607 { 00:16:53.607 "name": "spare", 00:16:53.607 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:16:53.607 "is_configured": true, 00:16:53.607 "data_offset": 0, 00:16:53.607 "data_size": 65536 00:16:53.607 }, 00:16:53.607 { 00:16:53.607 "name": "BaseBdev2", 00:16:53.607 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:53.607 "is_configured": true, 00:16:53.607 "data_offset": 0, 00:16:53.607 "data_size": 65536 00:16:53.607 }, 00:16:53.607 
{ 00:16:53.607 "name": "BaseBdev3", 00:16:53.607 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:53.607 "is_configured": true, 00:16:53.607 "data_offset": 0, 00:16:53.607 "data_size": 65536 00:16:53.607 } 00:16:53.607 ] 00:16:53.607 }' 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.607 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.607 [2024-11-20 14:33:54.661103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.865 [2024-11-20 14:33:54.697377] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:53.865 [2024-11-20 14:33:54.697453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.865 [2024-11-20 14:33:54.697482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.865 [2024-11-20 14:33:54.697494] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.865 "name": "raid_bdev1", 00:16:53.865 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:53.865 "strip_size_kb": 64, 00:16:53.865 "state": "online", 00:16:53.865 "raid_level": "raid5f", 00:16:53.865 "superblock": false, 00:16:53.865 "num_base_bdevs": 3, 00:16:53.865 "num_base_bdevs_discovered": 2, 00:16:53.865 "num_base_bdevs_operational": 2, 00:16:53.865 "base_bdevs_list": [ 00:16:53.865 { 00:16:53.865 "name": null, 00:16:53.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.865 
"is_configured": false, 00:16:53.865 "data_offset": 0, 00:16:53.865 "data_size": 65536 00:16:53.865 }, 00:16:53.865 { 00:16:53.865 "name": "BaseBdev2", 00:16:53.865 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:53.865 "is_configured": true, 00:16:53.865 "data_offset": 0, 00:16:53.865 "data_size": 65536 00:16:53.865 }, 00:16:53.865 { 00:16:53.865 "name": "BaseBdev3", 00:16:53.865 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:53.865 "is_configured": true, 00:16:53.865 "data_offset": 0, 00:16:53.865 "data_size": 65536 00:16:53.865 } 00:16:53.865 ] 00:16:53.865 }' 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.865 14:33:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.432 "name": 
"raid_bdev1", 00:16:54.432 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:54.432 "strip_size_kb": 64, 00:16:54.432 "state": "online", 00:16:54.432 "raid_level": "raid5f", 00:16:54.432 "superblock": false, 00:16:54.432 "num_base_bdevs": 3, 00:16:54.432 "num_base_bdevs_discovered": 2, 00:16:54.432 "num_base_bdevs_operational": 2, 00:16:54.432 "base_bdevs_list": [ 00:16:54.432 { 00:16:54.432 "name": null, 00:16:54.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.432 "is_configured": false, 00:16:54.432 "data_offset": 0, 00:16:54.432 "data_size": 65536 00:16:54.432 }, 00:16:54.432 { 00:16:54.432 "name": "BaseBdev2", 00:16:54.432 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:54.432 "is_configured": true, 00:16:54.432 "data_offset": 0, 00:16:54.432 "data_size": 65536 00:16:54.432 }, 00:16:54.432 { 00:16:54.432 "name": "BaseBdev3", 00:16:54.432 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:54.432 "is_configured": true, 00:16:54.432 "data_offset": 0, 00:16:54.432 "data_size": 65536 00:16:54.432 } 00:16:54.432 ] 00:16:54.432 }' 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.432 [2024-11-20 14:33:55.396559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.432 [2024-11-20 
14:33:55.411600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.432 14:33:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:54.433 [2024-11-20 14:33:55.418926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.366 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.624 "name": "raid_bdev1", 00:16:55.624 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:55.624 "strip_size_kb": 64, 00:16:55.624 "state": "online", 00:16:55.624 "raid_level": "raid5f", 00:16:55.624 "superblock": false, 00:16:55.624 "num_base_bdevs": 3, 00:16:55.624 "num_base_bdevs_discovered": 3, 00:16:55.624 "num_base_bdevs_operational": 3, 
00:16:55.624 "process": { 00:16:55.624 "type": "rebuild", 00:16:55.624 "target": "spare", 00:16:55.624 "progress": { 00:16:55.624 "blocks": 18432, 00:16:55.624 "percent": 14 00:16:55.624 } 00:16:55.624 }, 00:16:55.624 "base_bdevs_list": [ 00:16:55.624 { 00:16:55.624 "name": "spare", 00:16:55.624 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:16:55.624 "is_configured": true, 00:16:55.624 "data_offset": 0, 00:16:55.624 "data_size": 65536 00:16:55.624 }, 00:16:55.624 { 00:16:55.624 "name": "BaseBdev2", 00:16:55.624 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:55.624 "is_configured": true, 00:16:55.624 "data_offset": 0, 00:16:55.624 "data_size": 65536 00:16:55.624 }, 00:16:55.624 { 00:16:55.624 "name": "BaseBdev3", 00:16:55.624 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:55.624 "is_configured": true, 00:16:55.624 "data_offset": 0, 00:16:55.624 "data_size": 65536 00:16:55.624 } 00:16:55.624 ] 00:16:55.624 }' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=598 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.624 "name": "raid_bdev1", 00:16:55.624 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:55.624 "strip_size_kb": 64, 00:16:55.624 "state": "online", 00:16:55.624 "raid_level": "raid5f", 00:16:55.624 "superblock": false, 00:16:55.624 "num_base_bdevs": 3, 00:16:55.624 "num_base_bdevs_discovered": 3, 00:16:55.624 "num_base_bdevs_operational": 3, 00:16:55.624 "process": { 00:16:55.624 "type": "rebuild", 00:16:55.624 "target": "spare", 00:16:55.624 "progress": { 00:16:55.624 "blocks": 22528, 00:16:55.624 "percent": 17 00:16:55.624 } 00:16:55.624 }, 00:16:55.624 "base_bdevs_list": [ 00:16:55.624 { 00:16:55.624 "name": "spare", 00:16:55.624 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:16:55.624 "is_configured": true, 00:16:55.624 "data_offset": 0, 00:16:55.624 "data_size": 65536 00:16:55.624 }, 00:16:55.624 { 00:16:55.624 "name": "BaseBdev2", 
00:16:55.624 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:55.624 "is_configured": true, 00:16:55.624 "data_offset": 0, 00:16:55.624 "data_size": 65536 00:16:55.624 }, 00:16:55.624 { 00:16:55.624 "name": "BaseBdev3", 00:16:55.624 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:55.624 "is_configured": true, 00:16:55.624 "data_offset": 0, 00:16:55.624 "data_size": 65536 00:16:55.624 } 00:16:55.624 ] 00:16:55.624 }' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.624 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.882 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.882 14:33:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.817 14:33:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.817 "name": "raid_bdev1", 00:16:56.817 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:56.817 "strip_size_kb": 64, 00:16:56.817 "state": "online", 00:16:56.817 "raid_level": "raid5f", 00:16:56.817 "superblock": false, 00:16:56.817 "num_base_bdevs": 3, 00:16:56.817 "num_base_bdevs_discovered": 3, 00:16:56.817 "num_base_bdevs_operational": 3, 00:16:56.817 "process": { 00:16:56.817 "type": "rebuild", 00:16:56.817 "target": "spare", 00:16:56.817 "progress": { 00:16:56.817 "blocks": 45056, 00:16:56.817 "percent": 34 00:16:56.817 } 00:16:56.817 }, 00:16:56.817 "base_bdevs_list": [ 00:16:56.817 { 00:16:56.817 "name": "spare", 00:16:56.817 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:16:56.817 "is_configured": true, 00:16:56.817 "data_offset": 0, 00:16:56.817 "data_size": 65536 00:16:56.817 }, 00:16:56.817 { 00:16:56.817 "name": "BaseBdev2", 00:16:56.817 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:56.817 "is_configured": true, 00:16:56.817 "data_offset": 0, 00:16:56.817 "data_size": 65536 00:16:56.817 }, 00:16:56.817 { 00:16:56.817 "name": "BaseBdev3", 00:16:56.817 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:56.817 "is_configured": true, 00:16:56.817 "data_offset": 0, 00:16:56.817 "data_size": 65536 00:16:56.817 } 00:16:56.817 ] 00:16:56.817 }' 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.817 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.075 14:33:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.075 14:33:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.087 "name": "raid_bdev1", 00:16:58.087 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:58.087 "strip_size_kb": 64, 00:16:58.087 "state": "online", 00:16:58.087 "raid_level": "raid5f", 00:16:58.087 "superblock": false, 00:16:58.087 "num_base_bdevs": 3, 00:16:58.087 "num_base_bdevs_discovered": 3, 00:16:58.087 "num_base_bdevs_operational": 3, 00:16:58.087 "process": { 00:16:58.087 "type": "rebuild", 00:16:58.087 "target": "spare", 00:16:58.087 "progress": { 00:16:58.087 "blocks": 69632, 
00:16:58.087 "percent": 53 00:16:58.087 } 00:16:58.087 }, 00:16:58.087 "base_bdevs_list": [ 00:16:58.087 { 00:16:58.087 "name": "spare", 00:16:58.087 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:16:58.087 "is_configured": true, 00:16:58.087 "data_offset": 0, 00:16:58.087 "data_size": 65536 00:16:58.087 }, 00:16:58.087 { 00:16:58.087 "name": "BaseBdev2", 00:16:58.087 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:58.087 "is_configured": true, 00:16:58.087 "data_offset": 0, 00:16:58.087 "data_size": 65536 00:16:58.087 }, 00:16:58.087 { 00:16:58.087 "name": "BaseBdev3", 00:16:58.087 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:58.087 "is_configured": true, 00:16:58.087 "data_offset": 0, 00:16:58.087 "data_size": 65536 00:16:58.087 } 00:16:58.087 ] 00:16:58.087 }' 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.087 14:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.087 14:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.087 14:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.019 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.277 14:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.277 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.277 "name": "raid_bdev1", 00:16:59.277 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:16:59.277 "strip_size_kb": 64, 00:16:59.277 "state": "online", 00:16:59.277 "raid_level": "raid5f", 00:16:59.277 "superblock": false, 00:16:59.277 "num_base_bdevs": 3, 00:16:59.277 "num_base_bdevs_discovered": 3, 00:16:59.277 "num_base_bdevs_operational": 3, 00:16:59.277 "process": { 00:16:59.277 "type": "rebuild", 00:16:59.277 "target": "spare", 00:16:59.277 "progress": { 00:16:59.277 "blocks": 92160, 00:16:59.277 "percent": 70 00:16:59.277 } 00:16:59.277 }, 00:16:59.277 "base_bdevs_list": [ 00:16:59.277 { 00:16:59.277 "name": "spare", 00:16:59.277 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:16:59.277 "is_configured": true, 00:16:59.277 "data_offset": 0, 00:16:59.277 "data_size": 65536 00:16:59.277 }, 00:16:59.277 { 00:16:59.277 "name": "BaseBdev2", 00:16:59.277 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:16:59.277 "is_configured": true, 00:16:59.277 "data_offset": 0, 00:16:59.277 "data_size": 65536 00:16:59.277 }, 00:16:59.277 { 00:16:59.277 "name": "BaseBdev3", 00:16:59.277 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:16:59.277 "is_configured": true, 00:16:59.277 "data_offset": 0, 00:16:59.277 "data_size": 65536 00:16:59.277 } 00:16:59.277 ] 00:16:59.277 }' 00:16:59.277 14:34:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.277 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.277 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.277 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.277 14:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.212 14:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.470 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.470 "name": "raid_bdev1", 00:17:00.470 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:17:00.470 "strip_size_kb": 64, 00:17:00.470 "state": "online", 00:17:00.470 "raid_level": "raid5f", 
00:17:00.470 "superblock": false, 00:17:00.470 "num_base_bdevs": 3, 00:17:00.470 "num_base_bdevs_discovered": 3, 00:17:00.470 "num_base_bdevs_operational": 3, 00:17:00.470 "process": { 00:17:00.470 "type": "rebuild", 00:17:00.470 "target": "spare", 00:17:00.470 "progress": { 00:17:00.470 "blocks": 116736, 00:17:00.470 "percent": 89 00:17:00.470 } 00:17:00.470 }, 00:17:00.470 "base_bdevs_list": [ 00:17:00.470 { 00:17:00.470 "name": "spare", 00:17:00.470 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:17:00.470 "is_configured": true, 00:17:00.470 "data_offset": 0, 00:17:00.470 "data_size": 65536 00:17:00.470 }, 00:17:00.470 { 00:17:00.470 "name": "BaseBdev2", 00:17:00.470 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:17:00.470 "is_configured": true, 00:17:00.470 "data_offset": 0, 00:17:00.470 "data_size": 65536 00:17:00.470 }, 00:17:00.470 { 00:17:00.470 "name": "BaseBdev3", 00:17:00.470 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:17:00.470 "is_configured": true, 00:17:00.470 "data_offset": 0, 00:17:00.470 "data_size": 65536 00:17:00.470 } 00:17:00.470 ] 00:17:00.470 }' 00:17:00.470 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.470 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.470 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.470 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.470 14:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.035 [2024-11-20 14:34:01.893976] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:01.035 [2024-11-20 14:34:01.894109] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:01.035 [2024-11-20 14:34:01.894200] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.600 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.601 "name": "raid_bdev1", 00:17:01.601 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:17:01.601 "strip_size_kb": 64, 00:17:01.601 "state": "online", 00:17:01.601 "raid_level": "raid5f", 00:17:01.601 "superblock": false, 00:17:01.601 "num_base_bdevs": 3, 00:17:01.601 "num_base_bdevs_discovered": 3, 00:17:01.601 "num_base_bdevs_operational": 3, 00:17:01.601 "base_bdevs_list": [ 00:17:01.601 { 00:17:01.601 "name": "spare", 00:17:01.601 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:17:01.601 "is_configured": true, 00:17:01.601 "data_offset": 0, 00:17:01.601 "data_size": 65536 00:17:01.601 }, 00:17:01.601 { 00:17:01.601 "name": 
"BaseBdev2", 00:17:01.601 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:17:01.601 "is_configured": true, 00:17:01.601 "data_offset": 0, 00:17:01.601 "data_size": 65536 00:17:01.601 }, 00:17:01.601 { 00:17:01.601 "name": "BaseBdev3", 00:17:01.601 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:17:01.601 "is_configured": true, 00:17:01.601 "data_offset": 0, 00:17:01.601 "data_size": 65536 00:17:01.601 } 00:17:01.601 ] 00:17:01.601 }' 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.601 14:34:02 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.601 "name": "raid_bdev1", 00:17:01.601 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:17:01.601 "strip_size_kb": 64, 00:17:01.601 "state": "online", 00:17:01.601 "raid_level": "raid5f", 00:17:01.601 "superblock": false, 00:17:01.601 "num_base_bdevs": 3, 00:17:01.601 "num_base_bdevs_discovered": 3, 00:17:01.601 "num_base_bdevs_operational": 3, 00:17:01.601 "base_bdevs_list": [ 00:17:01.601 { 00:17:01.601 "name": "spare", 00:17:01.601 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:17:01.601 "is_configured": true, 00:17:01.601 "data_offset": 0, 00:17:01.601 "data_size": 65536 00:17:01.601 }, 00:17:01.601 { 00:17:01.601 "name": "BaseBdev2", 00:17:01.601 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:17:01.601 "is_configured": true, 00:17:01.601 "data_offset": 0, 00:17:01.601 "data_size": 65536 00:17:01.601 }, 00:17:01.601 { 00:17:01.601 "name": "BaseBdev3", 00:17:01.601 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:17:01.601 "is_configured": true, 00:17:01.601 "data_offset": 0, 00:17:01.601 "data_size": 65536 00:17:01.601 } 00:17:01.601 ] 00:17:01.601 }' 00:17:01.601 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.859 14:34:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.859 "name": "raid_bdev1", 00:17:01.859 "uuid": "f6c9f200-e4c8-4572-aec7-98441c78d6f7", 00:17:01.859 "strip_size_kb": 64, 00:17:01.859 "state": "online", 00:17:01.859 "raid_level": "raid5f", 00:17:01.859 "superblock": false, 00:17:01.859 "num_base_bdevs": 3, 00:17:01.859 "num_base_bdevs_discovered": 3, 00:17:01.859 "num_base_bdevs_operational": 3, 00:17:01.859 "base_bdevs_list": [ 00:17:01.859 { 00:17:01.859 "name": "spare", 00:17:01.859 "uuid": "d806ab37-6366-5cc4-b41f-0c2a9c7f9aa3", 00:17:01.859 "is_configured": 
true, 00:17:01.859 "data_offset": 0, 00:17:01.859 "data_size": 65536 00:17:01.859 }, 00:17:01.859 { 00:17:01.859 "name": "BaseBdev2", 00:17:01.859 "uuid": "64f74b8a-aea1-5883-a2ec-b2b8da65ce9e", 00:17:01.859 "is_configured": true, 00:17:01.859 "data_offset": 0, 00:17:01.859 "data_size": 65536 00:17:01.859 }, 00:17:01.859 { 00:17:01.859 "name": "BaseBdev3", 00:17:01.859 "uuid": "07e17901-61b5-52f6-b132-bc32ed628da4", 00:17:01.859 "is_configured": true, 00:17:01.859 "data_offset": 0, 00:17:01.859 "data_size": 65536 00:17:01.859 } 00:17:01.859 ] 00:17:01.859 }' 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.859 14:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.428 [2024-11-20 14:34:03.257687] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.428 [2024-11-20 14:34:03.257725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.428 [2024-11-20 14:34:03.257835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.428 [2024-11-20 14:34:03.257945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.428 [2024-11-20 14:34:03.257971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.428 14:34:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:02.428 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.429 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.429 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:02.687 /dev/nbd0 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.687 14:34:03 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.687 1+0 records in 00:17:02.687 1+0 records out 00:17:02.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332236 s, 12.3 MB/s 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.687 14:34:03 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:02.991 /dev/nbd1 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.991 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.249 1+0 records in 00:17:03.249 1+0 records out 00:17:03.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488192 s, 8.4 MB/s 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.249 
14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.249 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.507 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82038 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82038 ']' 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82038 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82038 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82038' 00:17:04.073 killing process with pid 82038 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82038 00:17:04.073 Received shutdown signal, test time was about 60.000000 seconds 00:17:04.073 00:17:04.073 Latency(us) 00:17:04.073 [2024-11-20T14:34:05.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.073 [2024-11-20T14:34:05.130Z] =================================================================================================================== 00:17:04.073 [2024-11-20T14:34:05.130Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.073 [2024-11-20 14:34:04.911124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:04.073 14:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82038 00:17:04.331 [2024-11-20 14:34:05.273732] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:05.706 00:17:05.706 real 0m16.445s 00:17:05.706 user 0m21.034s 00:17:05.706 sys 0m2.006s 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.706 ************************************ 00:17:05.706 END TEST raid5f_rebuild_test 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.706 ************************************ 00:17:05.706 14:34:06 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:05.706 14:34:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
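The trace above shows the pattern the harness uses throughout these nbd tests: attach a bdev over RPC, poll `/proc/partitions` until the kernel exposes the device, confirm it is readable with a single direct-I/O `dd`, then `cmp` the rebuilt device against the spare and tear everything down. A minimal standalone sketch of the polling-and-verify step (the `waitfornbd` logic; `nbd0` and the `/tmp` scratch path are illustrative, not the harness's actual paths):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern visible in the trace above. The device
# name and scratch file are hypothetical; the dd probe mirrors the log's
# "bs=4096 count=1 iflag=direct" read check.
waitfornbd() {
    local nbd_name=$1 i
    # Poll /proc/partitions until the kernel exposes the device (up to 20 tries,
    # matching the i <= 20 loop bound in the trace).
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1
    done
    # Confirm the device is actually readable with one direct-I/O block.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null || return 1
    # A zero-byte read would indicate a dead or unattached device.
    [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
}
```

The direct-I/O flag matters here: it bypasses the page cache, so a successful read proves the nbd connection to the SPDK target is live rather than returning stale cached data.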
00:17:05.706 14:34:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.706 14:34:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.706 ************************************ 00:17:05.706 START TEST raid5f_rebuild_test_sb 00:17:05.706 ************************************ 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82485 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82485 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 82485 ']' 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.706 14:34:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.706 [2024-11-20 14:34:06.505472] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:17:05.706 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:05.706 Zero copy mechanism will not be used. 
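Before any `rpc.py` call can succeed, `waitforlisten` blocks until the bdevperf target is accepting connections on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`, with `max_retries=100` as seen in the trace). A sketch of that idea, using a small Python connect probe as an illustrative stand-in for the harness's own check:

```shell
# Sketch of the waitforlisten idea from the trace: poll until something is
# listening on the UNIX-domain RPC socket before issuing rpc.py calls.
# The socket path and retry count come from the log; the python3 connect
# probe is an assumed substitute for the harness's internal check.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if python3 - "$sock" <<'EOF'
import socket, sys
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(sys.argv[1])
except OSError:
    sys.exit(1)
EOF
        then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

Checking for the socket file alone (`[ -S "$sock" ]`) would not be enough: the file can exist before the target's reactor is actually accepting connections, which is why a real connect attempt is used.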
00:17:05.706 [2024-11-20 14:34:06.505648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82485 ] 00:17:05.706 [2024-11-20 14:34:06.682550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.965 [2024-11-20 14:34:06.819237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.224 [2024-11-20 14:34:07.032278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.224 [2024-11-20 14:34:07.032374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 BaseBdev1_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 [2024-11-20 14:34:07.598276] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:06.791 [2024-11-20 14:34:07.598362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.791 [2024-11-20 14:34:07.598394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.791 [2024-11-20 14:34:07.598412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.791 [2024-11-20 14:34:07.601665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.791 [2024-11-20 14:34:07.601718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.791 BaseBdev1 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 BaseBdev2_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 [2024-11-20 14:34:07.648712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:06.791 [2024-11-20 14:34:07.648795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:06.791 [2024-11-20 14:34:07.648839] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.791 [2024-11-20 14:34:07.648864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.791 [2024-11-20 14:34:07.651732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.791 [2024-11-20 14:34:07.651797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.791 BaseBdev2 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 BaseBdev3_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 [2024-11-20 14:34:07.710076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:06.791 [2024-11-20 14:34:07.710163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.791 [2024-11-20 14:34:07.710197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.791 [2024-11-20 
14:34:07.710216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.791 [2024-11-20 14:34:07.713253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.791 [2024-11-20 14:34:07.713313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.791 BaseBdev3 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 spare_malloc 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 spare_delay 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.791 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.791 [2024-11-20 14:34:07.771782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.791 [2024-11-20 14:34:07.771857] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.791 [2024-11-20 14:34:07.771887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:06.791 [2024-11-20 14:34:07.771906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.792 [2024-11-20 14:34:07.774749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.792 [2024-11-20 14:34:07.774800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.792 spare 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.792 [2024-11-20 14:34:07.779892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.792 [2024-11-20 14:34:07.782787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.792 [2024-11-20 14:34:07.782889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.792 [2024-11-20 14:34:07.783145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:06.792 [2024-11-20 14:34:07.783174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:06.792 [2024-11-20 14:34:07.783516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:06.792 [2024-11-20 14:34:07.788756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:06.792 [2024-11-20 14:34:07.788796] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:06.792 [2024-11-20 14:34:07.789050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.792 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.050 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.050 "name": "raid_bdev1", 00:17:07.050 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:07.050 "strip_size_kb": 64, 00:17:07.050 "state": "online", 00:17:07.050 "raid_level": "raid5f", 00:17:07.050 "superblock": true, 00:17:07.050 "num_base_bdevs": 3, 00:17:07.050 "num_base_bdevs_discovered": 3, 00:17:07.050 "num_base_bdevs_operational": 3, 00:17:07.050 "base_bdevs_list": [ 00:17:07.050 { 00:17:07.050 "name": "BaseBdev1", 00:17:07.050 "uuid": "fa9bfbc3-8281-5a23-b22d-6fc6ae4ee198", 00:17:07.050 "is_configured": true, 00:17:07.050 "data_offset": 2048, 00:17:07.050 "data_size": 63488 00:17:07.050 }, 00:17:07.050 { 00:17:07.050 "name": "BaseBdev2", 00:17:07.050 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:07.050 "is_configured": true, 00:17:07.050 "data_offset": 2048, 00:17:07.050 "data_size": 63488 00:17:07.050 }, 00:17:07.050 { 00:17:07.050 "name": "BaseBdev3", 00:17:07.050 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:07.050 "is_configured": true, 00:17:07.050 "data_offset": 2048, 00:17:07.050 "data_size": 63488 00:17:07.050 } 00:17:07.050 ] 00:17:07.050 }' 00:17:07.050 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.050 14:34:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.308 [2024-11-20 14:34:08.303715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.308 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.566 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:07.825 [2024-11-20 14:34:08.731643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:07.825 /dev/nbd0 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.825 1+0 records in 00:17:07.825 1+0 records out 00:17:07.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389192 s, 10.5 MB/s 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:07.825 14:34:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:08.391 496+0 records in 00:17:08.391 496+0 records out 00:17:08.391 65011712 bytes (65 MB, 62 MiB) copied, 0.458939 s, 142 MB/s 00:17:08.391 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:08.391 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:08.391 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:08.391 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:08.391 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:08.391 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:17:08.391 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.650 [2024-11-20 14:34:09.595562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.650 [2024-11-20 14:34:09.605421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.650 "name": "raid_bdev1", 00:17:08.650 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:08.650 "strip_size_kb": 64, 00:17:08.650 "state": "online", 00:17:08.650 "raid_level": "raid5f", 00:17:08.650 "superblock": true, 00:17:08.650 "num_base_bdevs": 3, 00:17:08.650 "num_base_bdevs_discovered": 2, 00:17:08.650 "num_base_bdevs_operational": 2, 00:17:08.650 "base_bdevs_list": [ 00:17:08.650 { 00:17:08.650 "name": null, 00:17:08.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.650 "is_configured": 
false, 00:17:08.650 "data_offset": 0, 00:17:08.650 "data_size": 63488 00:17:08.650 }, 00:17:08.650 { 00:17:08.650 "name": "BaseBdev2", 00:17:08.650 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:08.650 "is_configured": true, 00:17:08.650 "data_offset": 2048, 00:17:08.650 "data_size": 63488 00:17:08.650 }, 00:17:08.650 { 00:17:08.650 "name": "BaseBdev3", 00:17:08.650 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:08.650 "is_configured": true, 00:17:08.650 "data_offset": 2048, 00:17:08.650 "data_size": 63488 00:17:08.650 } 00:17:08.650 ] 00:17:08.650 }' 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.650 14:34:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.215 14:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.215 14:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.215 14:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.215 [2024-11-20 14:34:10.113615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.215 [2024-11-20 14:34:10.131233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:09.215 14:34:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.215 14:34:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:09.215 [2024-11-20 14:34:10.139310] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.147 14:34:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.147 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.147 "name": "raid_bdev1", 00:17:10.147 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:10.147 "strip_size_kb": 64, 00:17:10.147 "state": "online", 00:17:10.147 "raid_level": "raid5f", 00:17:10.147 "superblock": true, 00:17:10.147 "num_base_bdevs": 3, 00:17:10.147 "num_base_bdevs_discovered": 3, 00:17:10.147 "num_base_bdevs_operational": 3, 00:17:10.147 "process": { 00:17:10.147 "type": "rebuild", 00:17:10.147 "target": "spare", 00:17:10.147 "progress": { 00:17:10.147 "blocks": 18432, 00:17:10.147 "percent": 14 00:17:10.147 } 00:17:10.147 }, 00:17:10.147 "base_bdevs_list": [ 00:17:10.148 { 00:17:10.148 "name": "spare", 00:17:10.148 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:10.148 "is_configured": true, 00:17:10.148 "data_offset": 2048, 00:17:10.148 "data_size": 63488 00:17:10.148 }, 00:17:10.148 { 00:17:10.148 "name": "BaseBdev2", 00:17:10.148 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:10.148 "is_configured": true, 00:17:10.148 "data_offset": 2048, 00:17:10.148 "data_size": 63488 
00:17:10.148 }, 00:17:10.148 { 00:17:10.148 "name": "BaseBdev3", 00:17:10.148 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:10.148 "is_configured": true, 00:17:10.148 "data_offset": 2048, 00:17:10.148 "data_size": 63488 00:17:10.148 } 00:17:10.148 ] 00:17:10.148 }' 00:17:10.148 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.406 [2024-11-20 14:34:11.313508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.406 [2024-11-20 14:34:11.355057] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.406 [2024-11-20 14:34:11.355140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.406 [2024-11-20 14:34:11.355170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.406 [2024-11-20 14:34:11.355182] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.406 "name": "raid_bdev1", 00:17:10.406 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:10.406 "strip_size_kb": 64, 00:17:10.406 "state": "online", 00:17:10.406 "raid_level": "raid5f", 00:17:10.406 "superblock": true, 00:17:10.406 "num_base_bdevs": 3, 00:17:10.406 "num_base_bdevs_discovered": 2, 00:17:10.406 "num_base_bdevs_operational": 2, 00:17:10.406 "base_bdevs_list": [ 00:17:10.406 
{ 00:17:10.406 "name": null, 00:17:10.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.406 "is_configured": false, 00:17:10.406 "data_offset": 0, 00:17:10.406 "data_size": 63488 00:17:10.406 }, 00:17:10.406 { 00:17:10.406 "name": "BaseBdev2", 00:17:10.406 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:10.406 "is_configured": true, 00:17:10.406 "data_offset": 2048, 00:17:10.406 "data_size": 63488 00:17:10.406 }, 00:17:10.406 { 00:17:10.406 "name": "BaseBdev3", 00:17:10.406 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:10.406 "is_configured": true, 00:17:10.406 "data_offset": 2048, 00:17:10.406 "data_size": 63488 00:17:10.406 } 00:17:10.406 ] 00:17:10.406 }' 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.406 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.973 "name": "raid_bdev1", 00:17:10.973 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:10.973 "strip_size_kb": 64, 00:17:10.973 "state": "online", 00:17:10.973 "raid_level": "raid5f", 00:17:10.973 "superblock": true, 00:17:10.973 "num_base_bdevs": 3, 00:17:10.973 "num_base_bdevs_discovered": 2, 00:17:10.973 "num_base_bdevs_operational": 2, 00:17:10.973 "base_bdevs_list": [ 00:17:10.973 { 00:17:10.973 "name": null, 00:17:10.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.973 "is_configured": false, 00:17:10.973 "data_offset": 0, 00:17:10.973 "data_size": 63488 00:17:10.973 }, 00:17:10.973 { 00:17:10.973 "name": "BaseBdev2", 00:17:10.973 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:10.973 "is_configured": true, 00:17:10.973 "data_offset": 2048, 00:17:10.973 "data_size": 63488 00:17:10.973 }, 00:17:10.973 { 00:17:10.973 "name": "BaseBdev3", 00:17:10.973 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:10.973 "is_configured": true, 00:17:10.973 "data_offset": 2048, 00:17:10.973 "data_size": 63488 00:17:10.973 } 00:17:10.973 ] 00:17:10.973 }' 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.973 14:34:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.231 14:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.231 14:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:11.231 14:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.231 14:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:11.231 [2024-11-20 14:34:12.042507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.231 [2024-11-20 14:34:12.058111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:11.231 14:34:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.231 14:34:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:11.231 [2024-11-20 14:34:12.065837] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.162 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.162 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.163 "name": "raid_bdev1", 00:17:12.163 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:12.163 "strip_size_kb": 64, 00:17:12.163 "state": "online", 
00:17:12.163 "raid_level": "raid5f", 00:17:12.163 "superblock": true, 00:17:12.163 "num_base_bdevs": 3, 00:17:12.163 "num_base_bdevs_discovered": 3, 00:17:12.163 "num_base_bdevs_operational": 3, 00:17:12.163 "process": { 00:17:12.163 "type": "rebuild", 00:17:12.163 "target": "spare", 00:17:12.163 "progress": { 00:17:12.163 "blocks": 18432, 00:17:12.163 "percent": 14 00:17:12.163 } 00:17:12.163 }, 00:17:12.163 "base_bdevs_list": [ 00:17:12.163 { 00:17:12.163 "name": "spare", 00:17:12.163 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:12.163 "is_configured": true, 00:17:12.163 "data_offset": 2048, 00:17:12.163 "data_size": 63488 00:17:12.163 }, 00:17:12.163 { 00:17:12.163 "name": "BaseBdev2", 00:17:12.163 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:12.163 "is_configured": true, 00:17:12.163 "data_offset": 2048, 00:17:12.163 "data_size": 63488 00:17:12.163 }, 00:17:12.163 { 00:17:12.163 "name": "BaseBdev3", 00:17:12.163 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:12.163 "is_configured": true, 00:17:12.163 "data_offset": 2048, 00:17:12.163 "data_size": 63488 00:17:12.163 } 00:17:12.163 ] 00:17:12.163 }' 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.163 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:12.420 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=615 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.420 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.420 "name": "raid_bdev1", 00:17:12.420 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:12.420 "strip_size_kb": 64, 00:17:12.420 "state": "online", 00:17:12.420 "raid_level": "raid5f", 00:17:12.420 "superblock": true, 00:17:12.420 "num_base_bdevs": 3, 00:17:12.420 "num_base_bdevs_discovered": 3, 00:17:12.420 "num_base_bdevs_operational": 3, 00:17:12.420 "process": { 00:17:12.421 "type": 
"rebuild", 00:17:12.421 "target": "spare", 00:17:12.421 "progress": { 00:17:12.421 "blocks": 22528, 00:17:12.421 "percent": 17 00:17:12.421 } 00:17:12.421 }, 00:17:12.421 "base_bdevs_list": [ 00:17:12.421 { 00:17:12.421 "name": "spare", 00:17:12.421 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:12.421 "is_configured": true, 00:17:12.421 "data_offset": 2048, 00:17:12.421 "data_size": 63488 00:17:12.421 }, 00:17:12.421 { 00:17:12.421 "name": "BaseBdev2", 00:17:12.421 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:12.421 "is_configured": true, 00:17:12.421 "data_offset": 2048, 00:17:12.421 "data_size": 63488 00:17:12.421 }, 00:17:12.421 { 00:17:12.421 "name": "BaseBdev3", 00:17:12.421 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:12.421 "is_configured": true, 00:17:12.421 "data_offset": 2048, 00:17:12.421 "data_size": 63488 00:17:12.421 } 00:17:12.421 ] 00:17:12.421 }' 00:17:12.421 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.421 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.421 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.421 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.421 14:34:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.386 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.386 "name": "raid_bdev1", 00:17:13.386 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:13.386 "strip_size_kb": 64, 00:17:13.387 "state": "online", 00:17:13.387 "raid_level": "raid5f", 00:17:13.387 "superblock": true, 00:17:13.387 "num_base_bdevs": 3, 00:17:13.387 "num_base_bdevs_discovered": 3, 00:17:13.387 "num_base_bdevs_operational": 3, 00:17:13.387 "process": { 00:17:13.387 "type": "rebuild", 00:17:13.387 "target": "spare", 00:17:13.387 "progress": { 00:17:13.387 "blocks": 45056, 00:17:13.387 "percent": 35 00:17:13.387 } 00:17:13.387 }, 00:17:13.387 "base_bdevs_list": [ 00:17:13.387 { 00:17:13.387 "name": "spare", 00:17:13.387 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:13.387 "is_configured": true, 00:17:13.387 "data_offset": 2048, 00:17:13.387 "data_size": 63488 00:17:13.387 }, 00:17:13.387 { 00:17:13.387 "name": "BaseBdev2", 00:17:13.387 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:13.387 "is_configured": true, 00:17:13.387 "data_offset": 2048, 00:17:13.387 "data_size": 63488 00:17:13.387 }, 00:17:13.387 { 00:17:13.387 "name": "BaseBdev3", 00:17:13.387 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:13.387 
"is_configured": true, 00:17:13.387 "data_offset": 2048, 00:17:13.387 "data_size": 63488 00:17:13.387 } 00:17:13.387 ] 00:17:13.387 }' 00:17:13.387 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.643 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.643 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.643 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.643 14:34:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.573 "name": "raid_bdev1", 00:17:14.573 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:14.573 "strip_size_kb": 64, 00:17:14.573 "state": "online", 00:17:14.573 "raid_level": "raid5f", 00:17:14.573 "superblock": true, 00:17:14.573 "num_base_bdevs": 3, 00:17:14.573 "num_base_bdevs_discovered": 3, 00:17:14.573 "num_base_bdevs_operational": 3, 00:17:14.573 "process": { 00:17:14.573 "type": "rebuild", 00:17:14.573 "target": "spare", 00:17:14.573 "progress": { 00:17:14.573 "blocks": 69632, 00:17:14.573 "percent": 54 00:17:14.573 } 00:17:14.573 }, 00:17:14.573 "base_bdevs_list": [ 00:17:14.573 { 00:17:14.573 "name": "spare", 00:17:14.573 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:14.573 "is_configured": true, 00:17:14.573 "data_offset": 2048, 00:17:14.573 "data_size": 63488 00:17:14.573 }, 00:17:14.573 { 00:17:14.573 "name": "BaseBdev2", 00:17:14.573 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:14.573 "is_configured": true, 00:17:14.573 "data_offset": 2048, 00:17:14.573 "data_size": 63488 00:17:14.573 }, 00:17:14.573 { 00:17:14.573 "name": "BaseBdev3", 00:17:14.573 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:14.573 "is_configured": true, 00:17:14.573 "data_offset": 2048, 00:17:14.573 "data_size": 63488 00:17:14.573 } 00:17:14.573 ] 00:17:14.573 }' 00:17:14.573 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.831 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.831 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.831 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.831 14:34:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.763 "name": "raid_bdev1", 00:17:15.763 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:15.763 "strip_size_kb": 64, 00:17:15.763 "state": "online", 00:17:15.763 "raid_level": "raid5f", 00:17:15.763 "superblock": true, 00:17:15.763 "num_base_bdevs": 3, 00:17:15.763 "num_base_bdevs_discovered": 3, 00:17:15.763 "num_base_bdevs_operational": 3, 00:17:15.763 "process": { 00:17:15.763 "type": "rebuild", 00:17:15.763 "target": "spare", 00:17:15.763 "progress": { 00:17:15.763 "blocks": 92160, 00:17:15.763 "percent": 72 00:17:15.763 } 00:17:15.763 }, 00:17:15.763 "base_bdevs_list": [ 00:17:15.763 { 00:17:15.763 "name": "spare", 00:17:15.763 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:15.763 "is_configured": true, 
00:17:15.763 "data_offset": 2048, 00:17:15.763 "data_size": 63488 00:17:15.763 }, 00:17:15.763 { 00:17:15.763 "name": "BaseBdev2", 00:17:15.763 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:15.763 "is_configured": true, 00:17:15.763 "data_offset": 2048, 00:17:15.763 "data_size": 63488 00:17:15.763 }, 00:17:15.763 { 00:17:15.763 "name": "BaseBdev3", 00:17:15.763 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:15.763 "is_configured": true, 00:17:15.763 "data_offset": 2048, 00:17:15.763 "data_size": 63488 00:17:15.763 } 00:17:15.763 ] 00:17:15.763 }' 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.763 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.022 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.022 14:34:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.957 "name": "raid_bdev1", 00:17:16.957 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:16.957 "strip_size_kb": 64, 00:17:16.957 "state": "online", 00:17:16.957 "raid_level": "raid5f", 00:17:16.957 "superblock": true, 00:17:16.957 "num_base_bdevs": 3, 00:17:16.957 "num_base_bdevs_discovered": 3, 00:17:16.957 "num_base_bdevs_operational": 3, 00:17:16.957 "process": { 00:17:16.957 "type": "rebuild", 00:17:16.957 "target": "spare", 00:17:16.957 "progress": { 00:17:16.957 "blocks": 116736, 00:17:16.957 "percent": 91 00:17:16.957 } 00:17:16.957 }, 00:17:16.957 "base_bdevs_list": [ 00:17:16.957 { 00:17:16.957 "name": "spare", 00:17:16.957 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:16.957 "is_configured": true, 00:17:16.957 "data_offset": 2048, 00:17:16.957 "data_size": 63488 00:17:16.957 }, 00:17:16.957 { 00:17:16.957 "name": "BaseBdev2", 00:17:16.957 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:16.957 "is_configured": true, 00:17:16.957 "data_offset": 2048, 00:17:16.957 "data_size": 63488 00:17:16.957 }, 00:17:16.957 { 00:17:16.957 "name": "BaseBdev3", 00:17:16.957 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:16.957 "is_configured": true, 00:17:16.957 "data_offset": 2048, 00:17:16.957 "data_size": 63488 00:17:16.957 } 00:17:16.957 ] 00:17:16.957 }' 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:16.957 14:34:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.216 14:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.216 14:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.473 [2024-11-20 14:34:18.335408] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:17.473 [2024-11-20 14:34:18.335507] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:17.473 [2024-11-20 14:34:18.335695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.039 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.302 14:34:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.303 "name": "raid_bdev1", 00:17:18.303 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:18.303 "strip_size_kb": 64, 00:17:18.303 "state": "online", 00:17:18.303 "raid_level": "raid5f", 00:17:18.303 "superblock": true, 00:17:18.303 "num_base_bdevs": 3, 00:17:18.303 "num_base_bdevs_discovered": 3, 00:17:18.303 "num_base_bdevs_operational": 3, 00:17:18.303 "base_bdevs_list": [ 00:17:18.303 { 00:17:18.303 "name": "spare", 00:17:18.303 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:18.303 "is_configured": true, 00:17:18.303 "data_offset": 2048, 00:17:18.303 "data_size": 63488 00:17:18.303 }, 00:17:18.303 { 00:17:18.303 "name": "BaseBdev2", 00:17:18.303 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:18.303 "is_configured": true, 00:17:18.303 "data_offset": 2048, 00:17:18.303 "data_size": 63488 00:17:18.303 }, 00:17:18.303 { 00:17:18.303 "name": "BaseBdev3", 00:17:18.303 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:18.303 "is_configured": true, 00:17:18.303 "data_offset": 2048, 00:17:18.303 "data_size": 63488 00:17:18.303 } 00:17:18.303 ] 00:17:18.303 }' 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.303 
14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.303 "name": "raid_bdev1", 00:17:18.303 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:18.303 "strip_size_kb": 64, 00:17:18.303 "state": "online", 00:17:18.303 "raid_level": "raid5f", 00:17:18.303 "superblock": true, 00:17:18.303 "num_base_bdevs": 3, 00:17:18.303 "num_base_bdevs_discovered": 3, 00:17:18.303 "num_base_bdevs_operational": 3, 00:17:18.303 "base_bdevs_list": [ 00:17:18.303 { 00:17:18.303 "name": "spare", 00:17:18.303 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:18.303 "is_configured": true, 00:17:18.303 "data_offset": 2048, 00:17:18.303 "data_size": 63488 00:17:18.303 }, 00:17:18.303 { 00:17:18.303 "name": "BaseBdev2", 00:17:18.303 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:18.303 "is_configured": true, 00:17:18.303 "data_offset": 2048, 00:17:18.303 "data_size": 63488 00:17:18.303 }, 00:17:18.303 { 00:17:18.303 "name": "BaseBdev3", 00:17:18.303 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:18.303 "is_configured": true, 00:17:18.303 "data_offset": 2048, 
00:17:18.303 "data_size": 63488 00:17:18.303 } 00:17:18.303 ] 00:17:18.303 }' 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.303 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.561 
14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.561 "name": "raid_bdev1", 00:17:18.561 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:18.561 "strip_size_kb": 64, 00:17:18.561 "state": "online", 00:17:18.561 "raid_level": "raid5f", 00:17:18.561 "superblock": true, 00:17:18.561 "num_base_bdevs": 3, 00:17:18.561 "num_base_bdevs_discovered": 3, 00:17:18.561 "num_base_bdevs_operational": 3, 00:17:18.561 "base_bdevs_list": [ 00:17:18.561 { 00:17:18.561 "name": "spare", 00:17:18.561 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:18.561 "is_configured": true, 00:17:18.561 "data_offset": 2048, 00:17:18.561 "data_size": 63488 00:17:18.561 }, 00:17:18.561 { 00:17:18.561 "name": "BaseBdev2", 00:17:18.561 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:18.561 "is_configured": true, 00:17:18.561 "data_offset": 2048, 00:17:18.561 "data_size": 63488 00:17:18.561 }, 00:17:18.561 { 00:17:18.561 "name": "BaseBdev3", 00:17:18.561 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:18.561 "is_configured": true, 00:17:18.561 "data_offset": 2048, 00:17:18.561 "data_size": 63488 00:17:18.561 } 00:17:18.561 ] 00:17:18.561 }' 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.561 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.128 [2024-11-20 14:34:19.899844] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.128 [2024-11-20 14:34:19.899882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.128 [2024-11-20 14:34:19.899992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.128 [2024-11-20 14:34:19.900141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.128 [2024-11-20 14:34:19.900163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:19.128 14:34:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.128 14:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:19.387 /dev/nbd0 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.387 1+0 records in 00:17:19.387 1+0 records out 00:17:19.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290001 s, 14.1 MB/s 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.387 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:19.747 /dev/nbd1 00:17:19.747 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:19.747 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:19.747 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:19.747 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:19.747 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:19.747 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:19.747 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:19.748 
14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.748 1+0 records in 00:17:19.748 1+0 records out 00:17:19.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324929 s, 12.6 MB/s 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.748 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:20.006 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:20.006 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:20.006 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.006 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.006 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:20.006 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.006 14:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.264 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.523 [2024-11-20 14:34:21.491094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.523 [2024-11-20 14:34:21.491176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.523 [2024-11-20 14:34:21.491204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:20.523 [2024-11-20 14:34:21.491221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.523 [2024-11-20 14:34:21.494262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.523 [2024-11-20 14:34:21.494312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.523 [2024-11-20 14:34:21.494425] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:20.523 [2024-11-20 14:34:21.494504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.523 [2024-11-20 14:34:21.494689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.523 [2024-11-20 14:34:21.494832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.523 spare 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.523 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.782 [2024-11-20 14:34:21.594957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:20.782 [2024-11-20 14:34:21.595217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:20.782 [2024-11-20 14:34:21.595660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:20.782 [2024-11-20 14:34:21.600946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:20.782 [2024-11-20 14:34:21.600972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:20.782 [2024-11-20 14:34:21.601304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.782 "name": "raid_bdev1", 00:17:20.782 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:20.782 "strip_size_kb": 64, 00:17:20.782 "state": "online", 00:17:20.782 "raid_level": "raid5f", 00:17:20.782 "superblock": true, 00:17:20.782 "num_base_bdevs": 3, 00:17:20.782 "num_base_bdevs_discovered": 3, 00:17:20.782 "num_base_bdevs_operational": 3, 00:17:20.782 "base_bdevs_list": [ 00:17:20.782 { 
00:17:20.782 "name": "spare", 00:17:20.782 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:20.782 "is_configured": true, 00:17:20.782 "data_offset": 2048, 00:17:20.782 "data_size": 63488 00:17:20.782 }, 00:17:20.782 { 00:17:20.782 "name": "BaseBdev2", 00:17:20.782 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:20.782 "is_configured": true, 00:17:20.782 "data_offset": 2048, 00:17:20.782 "data_size": 63488 00:17:20.782 }, 00:17:20.782 { 00:17:20.782 "name": "BaseBdev3", 00:17:20.782 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:20.782 "is_configured": true, 00:17:20.782 "data_offset": 2048, 00:17:20.782 "data_size": 63488 00:17:20.782 } 00:17:20.782 ] 00:17:20.782 }' 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.782 14:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.040 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.040 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.040 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.040 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.040 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.299 "name": "raid_bdev1", 00:17:21.299 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:21.299 "strip_size_kb": 64, 00:17:21.299 "state": "online", 00:17:21.299 "raid_level": "raid5f", 00:17:21.299 "superblock": true, 00:17:21.299 "num_base_bdevs": 3, 00:17:21.299 "num_base_bdevs_discovered": 3, 00:17:21.299 "num_base_bdevs_operational": 3, 00:17:21.299 "base_bdevs_list": [ 00:17:21.299 { 00:17:21.299 "name": "spare", 00:17:21.299 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:21.299 "is_configured": true, 00:17:21.299 "data_offset": 2048, 00:17:21.299 "data_size": 63488 00:17:21.299 }, 00:17:21.299 { 00:17:21.299 "name": "BaseBdev2", 00:17:21.299 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:21.299 "is_configured": true, 00:17:21.299 "data_offset": 2048, 00:17:21.299 "data_size": 63488 00:17:21.299 }, 00:17:21.299 { 00:17:21.299 "name": "BaseBdev3", 00:17:21.299 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:21.299 "is_configured": true, 00:17:21.299 "data_offset": 2048, 00:17:21.299 "data_size": 63488 00:17:21.299 } 00:17:21.299 ] 00:17:21.299 }' 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.299 [2024-11-20 14:34:22.331529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.299 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.557 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.557 "name": "raid_bdev1", 00:17:21.557 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:21.557 "strip_size_kb": 64, 00:17:21.557 "state": "online", 00:17:21.557 "raid_level": "raid5f", 00:17:21.557 "superblock": true, 00:17:21.557 "num_base_bdevs": 3, 00:17:21.557 "num_base_bdevs_discovered": 2, 00:17:21.557 "num_base_bdevs_operational": 2, 00:17:21.557 "base_bdevs_list": [ 00:17:21.557 { 00:17:21.557 "name": null, 00:17:21.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.557 "is_configured": false, 00:17:21.557 "data_offset": 0, 00:17:21.557 "data_size": 63488 00:17:21.557 }, 00:17:21.557 { 00:17:21.557 "name": "BaseBdev2", 00:17:21.557 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:21.557 "is_configured": true, 00:17:21.557 "data_offset": 2048, 00:17:21.557 "data_size": 63488 00:17:21.557 }, 00:17:21.557 { 00:17:21.557 "name": "BaseBdev3", 00:17:21.557 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:21.557 "is_configured": true, 00:17:21.557 "data_offset": 2048, 00:17:21.557 "data_size": 63488 00:17:21.557 } 00:17:21.557 ] 00:17:21.557 }' 00:17:21.557 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.557 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:21.815 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:21.815 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.815 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.073 [2024-11-20 14:34:22.871721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.074 [2024-11-20 14:34:22.871976] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:22.074 [2024-11-20 14:34:22.872004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:22.074 [2024-11-20 14:34:22.872055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.074 [2024-11-20 14:34:22.886786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:22.074 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.074 14:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:22.074 [2024-11-20 14:34:22.894532] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.007 "name": "raid_bdev1", 00:17:23.007 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:23.007 "strip_size_kb": 64, 00:17:23.007 "state": "online", 00:17:23.007 "raid_level": "raid5f", 00:17:23.007 "superblock": true, 00:17:23.007 "num_base_bdevs": 3, 00:17:23.007 "num_base_bdevs_discovered": 3, 00:17:23.007 "num_base_bdevs_operational": 3, 00:17:23.007 "process": { 00:17:23.007 "type": "rebuild", 00:17:23.007 "target": "spare", 00:17:23.007 "progress": { 00:17:23.007 "blocks": 18432, 00:17:23.007 "percent": 14 00:17:23.007 } 00:17:23.007 }, 00:17:23.007 "base_bdevs_list": [ 00:17:23.007 { 00:17:23.007 "name": "spare", 00:17:23.007 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:23.007 "is_configured": true, 00:17:23.007 "data_offset": 2048, 00:17:23.007 "data_size": 63488 00:17:23.007 }, 00:17:23.007 { 00:17:23.007 "name": "BaseBdev2", 00:17:23.007 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:23.007 "is_configured": true, 00:17:23.007 "data_offset": 2048, 00:17:23.007 "data_size": 63488 00:17:23.007 }, 00:17:23.007 { 00:17:23.007 "name": "BaseBdev3", 00:17:23.007 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:23.007 "is_configured": true, 00:17:23.007 "data_offset": 2048, 00:17:23.007 "data_size": 63488 00:17:23.007 } 00:17:23.007 ] 00:17:23.007 }' 00:17:23.007 14:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:17:23.007 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.007 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.007 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.007 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:23.007 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.007 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.266 [2024-11-20 14:34:24.064446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.266 [2024-11-20 14:34:24.109890] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:23.266 [2024-11-20 14:34:24.110007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.266 [2024-11-20 14:34:24.110033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.266 [2024-11-20 14:34:24.110054] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.266 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.267 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.267 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.267 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.267 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.267 "name": "raid_bdev1", 00:17:23.267 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:23.267 "strip_size_kb": 64, 00:17:23.267 "state": "online", 00:17:23.267 "raid_level": "raid5f", 00:17:23.267 "superblock": true, 00:17:23.267 "num_base_bdevs": 3, 00:17:23.267 "num_base_bdevs_discovered": 2, 00:17:23.267 "num_base_bdevs_operational": 2, 00:17:23.267 "base_bdevs_list": [ 00:17:23.267 { 00:17:23.267 "name": null, 00:17:23.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.267 "is_configured": false, 00:17:23.267 "data_offset": 0, 00:17:23.267 "data_size": 63488 00:17:23.267 }, 00:17:23.267 { 00:17:23.267 "name": "BaseBdev2", 00:17:23.267 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:23.267 "is_configured": true, 00:17:23.267 
"data_offset": 2048, 00:17:23.267 "data_size": 63488 00:17:23.267 }, 00:17:23.267 { 00:17:23.267 "name": "BaseBdev3", 00:17:23.267 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:23.267 "is_configured": true, 00:17:23.267 "data_offset": 2048, 00:17:23.267 "data_size": 63488 00:17:23.267 } 00:17:23.267 ] 00:17:23.267 }' 00:17:23.267 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.267 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.833 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.833 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.833 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.833 [2024-11-20 14:34:24.661342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.833 [2024-11-20 14:34:24.661556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.833 [2024-11-20 14:34:24.661600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:23.833 [2024-11-20 14:34:24.661636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.833 [2024-11-20 14:34:24.662287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.833 [2024-11-20 14:34:24.662324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.833 [2024-11-20 14:34:24.662452] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.833 [2024-11-20 14:34:24.662480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.833 [2024-11-20 14:34:24.662494] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:23.833 [2024-11-20 14:34:24.662537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.833 [2024-11-20 14:34:24.677056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:23.833 spare 00:17:23.833 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.833 14:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:23.833 [2024-11-20 14:34:24.684312] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.766 "name": "raid_bdev1", 00:17:24.766 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 
00:17:24.766 "strip_size_kb": 64, 00:17:24.766 "state": "online", 00:17:24.766 "raid_level": "raid5f", 00:17:24.766 "superblock": true, 00:17:24.766 "num_base_bdevs": 3, 00:17:24.766 "num_base_bdevs_discovered": 3, 00:17:24.766 "num_base_bdevs_operational": 3, 00:17:24.766 "process": { 00:17:24.766 "type": "rebuild", 00:17:24.766 "target": "spare", 00:17:24.766 "progress": { 00:17:24.766 "blocks": 18432, 00:17:24.766 "percent": 14 00:17:24.766 } 00:17:24.766 }, 00:17:24.766 "base_bdevs_list": [ 00:17:24.766 { 00:17:24.766 "name": "spare", 00:17:24.766 "uuid": "11fb1ea8-f9b1-5972-86c3-4008b6292a15", 00:17:24.766 "is_configured": true, 00:17:24.766 "data_offset": 2048, 00:17:24.766 "data_size": 63488 00:17:24.766 }, 00:17:24.766 { 00:17:24.766 "name": "BaseBdev2", 00:17:24.766 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:24.766 "is_configured": true, 00:17:24.766 "data_offset": 2048, 00:17:24.766 "data_size": 63488 00:17:24.766 }, 00:17:24.766 { 00:17:24.766 "name": "BaseBdev3", 00:17:24.766 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:24.766 "is_configured": true, 00:17:24.766 "data_offset": 2048, 00:17:24.766 "data_size": 63488 00:17:24.766 } 00:17:24.766 ] 00:17:24.766 }' 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.766 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:25.025 [2024-11-20 14:34:25.847148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.025 [2024-11-20 14:34:25.899418] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:25.025 [2024-11-20 14:34:25.899504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.025 [2024-11-20 14:34:25.899532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.025 [2024-11-20 14:34:25.899544] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.025 
14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.025 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.025 "name": "raid_bdev1", 00:17:25.025 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:25.025 "strip_size_kb": 64, 00:17:25.025 "state": "online", 00:17:25.025 "raid_level": "raid5f", 00:17:25.026 "superblock": true, 00:17:25.026 "num_base_bdevs": 3, 00:17:25.026 "num_base_bdevs_discovered": 2, 00:17:25.026 "num_base_bdevs_operational": 2, 00:17:25.026 "base_bdevs_list": [ 00:17:25.026 { 00:17:25.026 "name": null, 00:17:25.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.026 "is_configured": false, 00:17:25.026 "data_offset": 0, 00:17:25.026 "data_size": 63488 00:17:25.026 }, 00:17:25.026 { 00:17:25.026 "name": "BaseBdev2", 00:17:25.026 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:25.026 "is_configured": true, 00:17:25.026 "data_offset": 2048, 00:17:25.026 "data_size": 63488 00:17:25.026 }, 00:17:25.026 { 00:17:25.026 "name": "BaseBdev3", 00:17:25.026 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:25.026 "is_configured": true, 00:17:25.026 "data_offset": 2048, 00:17:25.026 "data_size": 63488 00:17:25.026 } 00:17:25.026 ] 00:17:25.026 }' 00:17:25.026 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.026 14:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.591 14:34:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.591 "name": "raid_bdev1", 00:17:25.591 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:25.591 "strip_size_kb": 64, 00:17:25.591 "state": "online", 00:17:25.591 "raid_level": "raid5f", 00:17:25.591 "superblock": true, 00:17:25.591 "num_base_bdevs": 3, 00:17:25.591 "num_base_bdevs_discovered": 2, 00:17:25.591 "num_base_bdevs_operational": 2, 00:17:25.591 "base_bdevs_list": [ 00:17:25.591 { 00:17:25.591 "name": null, 00:17:25.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.591 "is_configured": false, 00:17:25.591 "data_offset": 0, 00:17:25.591 "data_size": 63488 00:17:25.591 }, 00:17:25.591 { 00:17:25.591 "name": "BaseBdev2", 00:17:25.591 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:25.591 "is_configured": true, 00:17:25.591 "data_offset": 2048, 00:17:25.591 "data_size": 63488 00:17:25.591 }, 00:17:25.591 { 00:17:25.591 "name": "BaseBdev3", 00:17:25.591 "uuid": 
"be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:25.591 "is_configured": true, 00:17:25.591 "data_offset": 2048, 00:17:25.591 "data_size": 63488 00:17:25.591 } 00:17:25.591 ] 00:17:25.591 }' 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.591 [2024-11-20 14:34:26.618681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.591 [2024-11-20 14:34:26.618749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.591 [2024-11-20 14:34:26.618788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:25.591 [2024-11-20 14:34:26.618804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.591 [2024-11-20 14:34:26.619398] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.591 [2024-11-20 14:34:26.619432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.591 [2024-11-20 14:34:26.619576] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:25.591 [2024-11-20 14:34:26.619609] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.591 [2024-11-20 14:34:26.619666] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.591 [2024-11-20 14:34:26.619687] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:25.591 BaseBdev1 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.591 14:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.965 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.965 "name": "raid_bdev1", 00:17:26.965 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:26.965 "strip_size_kb": 64, 00:17:26.965 "state": "online", 00:17:26.965 "raid_level": "raid5f", 00:17:26.965 "superblock": true, 00:17:26.965 "num_base_bdevs": 3, 00:17:26.965 "num_base_bdevs_discovered": 2, 00:17:26.965 "num_base_bdevs_operational": 2, 00:17:26.965 "base_bdevs_list": [ 00:17:26.965 { 00:17:26.965 "name": null, 00:17:26.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.965 "is_configured": false, 00:17:26.965 "data_offset": 0, 00:17:26.965 "data_size": 63488 00:17:26.965 }, 00:17:26.965 { 00:17:26.965 "name": "BaseBdev2", 00:17:26.965 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:26.965 "is_configured": true, 00:17:26.965 "data_offset": 2048, 00:17:26.965 "data_size": 63488 00:17:26.965 }, 00:17:26.965 { 00:17:26.965 "name": "BaseBdev3", 00:17:26.966 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:26.966 "is_configured": true, 00:17:26.966 "data_offset": 2048, 00:17:26.966 "data_size": 63488 00:17:26.966 } 00:17:26.966 ] 00:17:26.966 }' 00:17:26.966 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:26.966 14:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.223 "name": "raid_bdev1", 00:17:27.223 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:27.223 "strip_size_kb": 64, 00:17:27.223 "state": "online", 00:17:27.223 "raid_level": "raid5f", 00:17:27.223 "superblock": true, 00:17:27.223 "num_base_bdevs": 3, 00:17:27.223 "num_base_bdevs_discovered": 2, 00:17:27.223 "num_base_bdevs_operational": 2, 00:17:27.223 "base_bdevs_list": [ 00:17:27.223 { 00:17:27.223 "name": null, 00:17:27.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.223 "is_configured": false, 00:17:27.223 "data_offset": 0, 00:17:27.223 "data_size": 63488 00:17:27.223 }, 00:17:27.223 { 00:17:27.223 "name": 
"BaseBdev2", 00:17:27.223 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:27.223 "is_configured": true, 00:17:27.223 "data_offset": 2048, 00:17:27.223 "data_size": 63488 00:17:27.223 }, 00:17:27.223 { 00:17:27.223 "name": "BaseBdev3", 00:17:27.223 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:27.223 "is_configured": true, 00:17:27.223 "data_offset": 2048, 00:17:27.223 "data_size": 63488 00:17:27.223 } 00:17:27.223 ] 00:17:27.223 }' 00:17:27.223 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.481 [2024-11-20 14:34:28.343334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.481 [2024-11-20 14:34:28.343545] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.481 [2024-11-20 14:34:28.343567] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:27.481 request: 00:17:27.481 { 00:17:27.481 "base_bdev": "BaseBdev1", 00:17:27.481 "raid_bdev": "raid_bdev1", 00:17:27.481 "method": "bdev_raid_add_base_bdev", 00:17:27.481 "req_id": 1 00:17:27.481 } 00:17:27.481 Got JSON-RPC error response 00:17:27.481 response: 00:17:27.481 { 00:17:27.481 "code": -22, 00:17:27.481 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:27.481 } 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:27.481 14:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.416 "name": "raid_bdev1", 00:17:28.416 "uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:28.416 "strip_size_kb": 64, 00:17:28.416 "state": "online", 00:17:28.416 "raid_level": "raid5f", 00:17:28.416 "superblock": true, 00:17:28.416 "num_base_bdevs": 3, 00:17:28.416 "num_base_bdevs_discovered": 2, 00:17:28.416 "num_base_bdevs_operational": 2, 00:17:28.416 "base_bdevs_list": [ 00:17:28.416 { 00:17:28.416 "name": null, 00:17:28.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.416 "is_configured": false, 00:17:28.416 "data_offset": 0, 00:17:28.416 
"data_size": 63488 00:17:28.416 }, 00:17:28.416 { 00:17:28.416 "name": "BaseBdev2", 00:17:28.416 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:28.416 "is_configured": true, 00:17:28.416 "data_offset": 2048, 00:17:28.416 "data_size": 63488 00:17:28.416 }, 00:17:28.416 { 00:17:28.416 "name": "BaseBdev3", 00:17:28.416 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:28.416 "is_configured": true, 00:17:28.416 "data_offset": 2048, 00:17:28.416 "data_size": 63488 00:17:28.416 } 00:17:28.416 ] 00:17:28.416 }' 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.416 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.983 "name": "raid_bdev1", 00:17:28.983 
"uuid": "57ec360e-3cec-4c8c-b0de-0033c5a86d6b", 00:17:28.983 "strip_size_kb": 64, 00:17:28.983 "state": "online", 00:17:28.983 "raid_level": "raid5f", 00:17:28.983 "superblock": true, 00:17:28.983 "num_base_bdevs": 3, 00:17:28.983 "num_base_bdevs_discovered": 2, 00:17:28.983 "num_base_bdevs_operational": 2, 00:17:28.983 "base_bdevs_list": [ 00:17:28.983 { 00:17:28.983 "name": null, 00:17:28.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.983 "is_configured": false, 00:17:28.983 "data_offset": 0, 00:17:28.983 "data_size": 63488 00:17:28.983 }, 00:17:28.983 { 00:17:28.983 "name": "BaseBdev2", 00:17:28.983 "uuid": "a18dc173-3869-5af9-a35f-3b9a76115400", 00:17:28.983 "is_configured": true, 00:17:28.983 "data_offset": 2048, 00:17:28.983 "data_size": 63488 00:17:28.983 }, 00:17:28.983 { 00:17:28.983 "name": "BaseBdev3", 00:17:28.983 "uuid": "be7e8846-7ebd-5062-9430-88eb9351674b", 00:17:28.983 "is_configured": true, 00:17:28.983 "data_offset": 2048, 00:17:28.983 "data_size": 63488 00:17:28.983 } 00:17:28.983 ] 00:17:28.983 }' 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.983 14:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.983 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.983 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82485 00:17:28.983 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82485 ']' 00:17:28.983 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82485 00:17:28.983 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:29.241 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.241 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82485 00:17:29.241 killing process with pid 82485 00:17:29.241 Received shutdown signal, test time was about 60.000000 seconds 00:17:29.241 00:17:29.241 Latency(us) 00:17:29.241 [2024-11-20T14:34:30.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.241 [2024-11-20T14:34:30.298Z] =================================================================================================================== 00:17:29.241 [2024-11-20T14:34:30.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.241 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.241 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.241 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82485' 00:17:29.241 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82485 00:17:29.241 [2024-11-20 14:34:30.070436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.241 14:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82485 00:17:29.241 [2024-11-20 14:34:30.070603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.241 [2024-11-20 14:34:30.070708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.241 [2024-11-20 14:34:30.070730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:29.500 [2024-11-20 14:34:30.441448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.872 14:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:17:30.872 00:17:30.872 real 0m25.215s 00:17:30.872 user 0m33.652s 00:17:30.872 sys 0m2.676s 00:17:30.872 ************************************ 00:17:30.872 END TEST raid5f_rebuild_test_sb 00:17:30.872 ************************************ 00:17:30.872 14:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.872 14:34:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.872 14:34:31 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:30.872 14:34:31 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:30.872 14:34:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:30.873 14:34:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.873 14:34:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.873 ************************************ 00:17:30.873 START TEST raid5f_state_function_test 00:17:30.873 ************************************ 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:30.873 14:34:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:30.873 
14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83245 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:30.873 Process raid pid: 83245 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83245' 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83245 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83245 ']' 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.873 14:34:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.873 [2024-11-20 14:34:31.796348] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:17:30.873 [2024-11-20 14:34:31.796766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.137 [2024-11-20 14:34:31.986544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.137 [2024-11-20 14:34:32.142889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.402 [2024-11-20 14:34:32.374126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.402 [2024-11-20 14:34:32.374248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.968 [2024-11-20 14:34:32.799827] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.968 [2024-11-20 14:34:32.799935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.968 [2024-11-20 14:34:32.799958] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.968 [2024-11-20 14:34:32.799978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.968 [2024-11-20 14:34:32.799991] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:31.968 [2024-11-20 14:34:32.800017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:31.968 [2024-11-20 14:34:32.800030] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:31.968 [2024-11-20 14:34:32.800047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.968 14:34:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.968 "name": "Existed_Raid", 00:17:31.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.968 "strip_size_kb": 64, 00:17:31.968 "state": "configuring", 00:17:31.968 "raid_level": "raid5f", 00:17:31.968 "superblock": false, 00:17:31.968 "num_base_bdevs": 4, 00:17:31.968 "num_base_bdevs_discovered": 0, 00:17:31.968 "num_base_bdevs_operational": 4, 00:17:31.968 "base_bdevs_list": [ 00:17:31.968 { 00:17:31.968 "name": "BaseBdev1", 00:17:31.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.968 "is_configured": false, 00:17:31.968 "data_offset": 0, 00:17:31.968 "data_size": 0 00:17:31.968 }, 00:17:31.968 { 00:17:31.968 "name": "BaseBdev2", 00:17:31.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.968 "is_configured": false, 00:17:31.968 "data_offset": 0, 00:17:31.968 "data_size": 0 00:17:31.968 }, 00:17:31.968 { 00:17:31.968 "name": "BaseBdev3", 00:17:31.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.968 "is_configured": false, 00:17:31.968 "data_offset": 0, 00:17:31.968 "data_size": 0 00:17:31.968 }, 00:17:31.968 { 00:17:31.968 "name": "BaseBdev4", 00:17:31.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.968 "is_configured": false, 00:17:31.968 "data_offset": 0, 00:17:31.968 "data_size": 0 00:17:31.968 } 00:17:31.968 ] 00:17:31.968 }' 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.968 14:34:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.534 [2024-11-20 14:34:33.339981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:32.534 [2024-11-20 14:34:33.340099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.534 [2024-11-20 14:34:33.347901] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.534 [2024-11-20 14:34:33.347968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.534 [2024-11-20 14:34:33.347987] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.534 [2024-11-20 14:34:33.348007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.534 [2024-11-20 14:34:33.348039] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.534 [2024-11-20 14:34:33.348073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.534 [2024-11-20 14:34:33.348085] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:32.534 [2024-11-20 14:34:33.348101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.534 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.535 [2024-11-20 14:34:33.398274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.535 BaseBdev1 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.535 
14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.535 [ 00:17:32.535 { 00:17:32.535 "name": "BaseBdev1", 00:17:32.535 "aliases": [ 00:17:32.535 "a839a1ba-4535-485e-99de-ecf6865e5220" 00:17:32.535 ], 00:17:32.535 "product_name": "Malloc disk", 00:17:32.535 "block_size": 512, 00:17:32.535 "num_blocks": 65536, 00:17:32.535 "uuid": "a839a1ba-4535-485e-99de-ecf6865e5220", 00:17:32.535 "assigned_rate_limits": { 00:17:32.535 "rw_ios_per_sec": 0, 00:17:32.535 "rw_mbytes_per_sec": 0, 00:17:32.535 "r_mbytes_per_sec": 0, 00:17:32.535 "w_mbytes_per_sec": 0 00:17:32.535 }, 00:17:32.535 "claimed": true, 00:17:32.535 "claim_type": "exclusive_write", 00:17:32.535 "zoned": false, 00:17:32.535 "supported_io_types": { 00:17:32.535 "read": true, 00:17:32.535 "write": true, 00:17:32.535 "unmap": true, 00:17:32.535 "flush": true, 00:17:32.535 "reset": true, 00:17:32.535 "nvme_admin": false, 00:17:32.535 "nvme_io": false, 00:17:32.535 "nvme_io_md": false, 00:17:32.535 "write_zeroes": true, 00:17:32.535 "zcopy": true, 00:17:32.535 "get_zone_info": false, 00:17:32.535 "zone_management": false, 00:17:32.535 "zone_append": false, 00:17:32.535 "compare": false, 00:17:32.535 "compare_and_write": false, 00:17:32.535 "abort": true, 00:17:32.535 "seek_hole": false, 00:17:32.535 "seek_data": false, 00:17:32.535 "copy": true, 00:17:32.535 "nvme_iov_md": false 00:17:32.535 }, 00:17:32.535 "memory_domains": [ 00:17:32.535 { 00:17:32.535 "dma_device_id": "system", 00:17:32.535 "dma_device_type": 1 00:17:32.535 }, 00:17:32.535 { 00:17:32.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.535 "dma_device_type": 2 00:17:32.535 } 00:17:32.535 ], 00:17:32.535 "driver_specific": {} 00:17:32.535 } 
00:17:32.535 ] 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.535 "name": "Existed_Raid", 00:17:32.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.535 "strip_size_kb": 64, 00:17:32.535 "state": "configuring", 00:17:32.535 "raid_level": "raid5f", 00:17:32.535 "superblock": false, 00:17:32.535 "num_base_bdevs": 4, 00:17:32.535 "num_base_bdevs_discovered": 1, 00:17:32.535 "num_base_bdevs_operational": 4, 00:17:32.535 "base_bdevs_list": [ 00:17:32.535 { 00:17:32.535 "name": "BaseBdev1", 00:17:32.535 "uuid": "a839a1ba-4535-485e-99de-ecf6865e5220", 00:17:32.535 "is_configured": true, 00:17:32.535 "data_offset": 0, 00:17:32.535 "data_size": 65536 00:17:32.535 }, 00:17:32.535 { 00:17:32.535 "name": "BaseBdev2", 00:17:32.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.535 "is_configured": false, 00:17:32.535 "data_offset": 0, 00:17:32.535 "data_size": 0 00:17:32.535 }, 00:17:32.535 { 00:17:32.535 "name": "BaseBdev3", 00:17:32.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.535 "is_configured": false, 00:17:32.535 "data_offset": 0, 00:17:32.535 "data_size": 0 00:17:32.535 }, 00:17:32.535 { 00:17:32.535 "name": "BaseBdev4", 00:17:32.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.535 "is_configured": false, 00:17:32.535 "data_offset": 0, 00:17:32.535 "data_size": 0 00:17:32.535 } 00:17:32.535 ] 00:17:32.535 }' 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.535 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.101 
[2024-11-20 14:34:33.938426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.101 [2024-11-20 14:34:33.938700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.101 [2024-11-20 14:34:33.946499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.101 [2024-11-20 14:34:33.949308] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.101 [2024-11-20 14:34:33.949371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.101 [2024-11-20 14:34:33.949403] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.101 [2024-11-20 14:34:33.949424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.101 [2024-11-20 14:34:33.949437] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:33.101 [2024-11-20 14:34:33.949454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.101 "name": "Existed_Raid", 00:17:33.101 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:33.101 "strip_size_kb": 64, 00:17:33.101 "state": "configuring", 00:17:33.101 "raid_level": "raid5f", 00:17:33.101 "superblock": false, 00:17:33.101 "num_base_bdevs": 4, 00:17:33.101 "num_base_bdevs_discovered": 1, 00:17:33.101 "num_base_bdevs_operational": 4, 00:17:33.101 "base_bdevs_list": [ 00:17:33.101 { 00:17:33.101 "name": "BaseBdev1", 00:17:33.101 "uuid": "a839a1ba-4535-485e-99de-ecf6865e5220", 00:17:33.101 "is_configured": true, 00:17:33.101 "data_offset": 0, 00:17:33.101 "data_size": 65536 00:17:33.101 }, 00:17:33.101 { 00:17:33.101 "name": "BaseBdev2", 00:17:33.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.101 "is_configured": false, 00:17:33.101 "data_offset": 0, 00:17:33.101 "data_size": 0 00:17:33.101 }, 00:17:33.101 { 00:17:33.101 "name": "BaseBdev3", 00:17:33.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.101 "is_configured": false, 00:17:33.101 "data_offset": 0, 00:17:33.101 "data_size": 0 00:17:33.101 }, 00:17:33.101 { 00:17:33.101 "name": "BaseBdev4", 00:17:33.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.101 "is_configured": false, 00:17:33.101 "data_offset": 0, 00:17:33.101 "data_size": 0 00:17:33.101 } 00:17:33.101 ] 00:17:33.101 }' 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.101 14:34:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.667 [2024-11-20 14:34:34.497175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.667 BaseBdev2 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.667 [ 00:17:33.667 { 00:17:33.667 "name": "BaseBdev2", 00:17:33.667 "aliases": [ 00:17:33.667 "7ebe95bd-9386-4f29-b224-c2291219e047" 00:17:33.667 ], 00:17:33.667 "product_name": "Malloc disk", 00:17:33.667 "block_size": 512, 00:17:33.667 "num_blocks": 65536, 00:17:33.667 "uuid": "7ebe95bd-9386-4f29-b224-c2291219e047", 00:17:33.667 "assigned_rate_limits": { 00:17:33.667 "rw_ios_per_sec": 0, 00:17:33.667 "rw_mbytes_per_sec": 0, 00:17:33.667 
"r_mbytes_per_sec": 0, 00:17:33.667 "w_mbytes_per_sec": 0 00:17:33.667 }, 00:17:33.667 "claimed": true, 00:17:33.667 "claim_type": "exclusive_write", 00:17:33.667 "zoned": false, 00:17:33.667 "supported_io_types": { 00:17:33.667 "read": true, 00:17:33.667 "write": true, 00:17:33.667 "unmap": true, 00:17:33.667 "flush": true, 00:17:33.667 "reset": true, 00:17:33.667 "nvme_admin": false, 00:17:33.667 "nvme_io": false, 00:17:33.667 "nvme_io_md": false, 00:17:33.667 "write_zeroes": true, 00:17:33.667 "zcopy": true, 00:17:33.667 "get_zone_info": false, 00:17:33.667 "zone_management": false, 00:17:33.667 "zone_append": false, 00:17:33.667 "compare": false, 00:17:33.667 "compare_and_write": false, 00:17:33.667 "abort": true, 00:17:33.667 "seek_hole": false, 00:17:33.667 "seek_data": false, 00:17:33.667 "copy": true, 00:17:33.667 "nvme_iov_md": false 00:17:33.667 }, 00:17:33.667 "memory_domains": [ 00:17:33.667 { 00:17:33.667 "dma_device_id": "system", 00:17:33.667 "dma_device_type": 1 00:17:33.667 }, 00:17:33.667 { 00:17:33.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.667 "dma_device_type": 2 00:17:33.667 } 00:17:33.667 ], 00:17:33.667 "driver_specific": {} 00:17:33.667 } 00:17:33.667 ] 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.667 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.667 "name": "Existed_Raid", 00:17:33.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.667 "strip_size_kb": 64, 00:17:33.667 "state": "configuring", 00:17:33.667 "raid_level": "raid5f", 00:17:33.667 "superblock": false, 00:17:33.668 "num_base_bdevs": 4, 00:17:33.668 "num_base_bdevs_discovered": 2, 00:17:33.668 "num_base_bdevs_operational": 4, 00:17:33.668 "base_bdevs_list": [ 00:17:33.668 { 00:17:33.668 "name": "BaseBdev1", 00:17:33.668 "uuid": 
"a839a1ba-4535-485e-99de-ecf6865e5220", 00:17:33.668 "is_configured": true, 00:17:33.668 "data_offset": 0, 00:17:33.668 "data_size": 65536 00:17:33.668 }, 00:17:33.668 { 00:17:33.668 "name": "BaseBdev2", 00:17:33.668 "uuid": "7ebe95bd-9386-4f29-b224-c2291219e047", 00:17:33.668 "is_configured": true, 00:17:33.668 "data_offset": 0, 00:17:33.668 "data_size": 65536 00:17:33.668 }, 00:17:33.668 { 00:17:33.668 "name": "BaseBdev3", 00:17:33.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.668 "is_configured": false, 00:17:33.668 "data_offset": 0, 00:17:33.668 "data_size": 0 00:17:33.668 }, 00:17:33.668 { 00:17:33.668 "name": "BaseBdev4", 00:17:33.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.668 "is_configured": false, 00:17:33.668 "data_offset": 0, 00:17:33.668 "data_size": 0 00:17:33.668 } 00:17:33.668 ] 00:17:33.668 }' 00:17:33.668 14:34:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.668 14:34:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 [2024-11-20 14:34:35.108407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.233 BaseBdev3 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 [ 00:17:34.233 { 00:17:34.233 "name": "BaseBdev3", 00:17:34.233 "aliases": [ 00:17:34.233 "08895449-5925-4b07-8d83-5f57307e2e5d" 00:17:34.233 ], 00:17:34.233 "product_name": "Malloc disk", 00:17:34.233 "block_size": 512, 00:17:34.233 "num_blocks": 65536, 00:17:34.233 "uuid": "08895449-5925-4b07-8d83-5f57307e2e5d", 00:17:34.233 "assigned_rate_limits": { 00:17:34.233 "rw_ios_per_sec": 0, 00:17:34.233 "rw_mbytes_per_sec": 0, 00:17:34.233 "r_mbytes_per_sec": 0, 00:17:34.233 "w_mbytes_per_sec": 0 00:17:34.233 }, 00:17:34.233 "claimed": true, 00:17:34.233 "claim_type": "exclusive_write", 00:17:34.233 "zoned": false, 00:17:34.233 "supported_io_types": { 00:17:34.233 "read": true, 00:17:34.233 "write": true, 00:17:34.233 "unmap": true, 00:17:34.233 "flush": true, 00:17:34.233 "reset": true, 00:17:34.233 "nvme_admin": false, 
00:17:34.233 "nvme_io": false, 00:17:34.233 "nvme_io_md": false, 00:17:34.233 "write_zeroes": true, 00:17:34.233 "zcopy": true, 00:17:34.233 "get_zone_info": false, 00:17:34.233 "zone_management": false, 00:17:34.233 "zone_append": false, 00:17:34.233 "compare": false, 00:17:34.233 "compare_and_write": false, 00:17:34.233 "abort": true, 00:17:34.233 "seek_hole": false, 00:17:34.233 "seek_data": false, 00:17:34.233 "copy": true, 00:17:34.233 "nvme_iov_md": false 00:17:34.233 }, 00:17:34.233 "memory_domains": [ 00:17:34.233 { 00:17:34.233 "dma_device_id": "system", 00:17:34.233 "dma_device_type": 1 00:17:34.233 }, 00:17:34.233 { 00:17:34.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.233 "dma_device_type": 2 00:17:34.233 } 00:17:34.233 ], 00:17:34.233 "driver_specific": {} 00:17:34.233 } 00:17:34.233 ] 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.233 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.233 "name": "Existed_Raid", 00:17:34.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.233 "strip_size_kb": 64, 00:17:34.233 "state": "configuring", 00:17:34.233 "raid_level": "raid5f", 00:17:34.233 "superblock": false, 00:17:34.233 "num_base_bdevs": 4, 00:17:34.233 "num_base_bdevs_discovered": 3, 00:17:34.233 "num_base_bdevs_operational": 4, 00:17:34.233 "base_bdevs_list": [ 00:17:34.233 { 00:17:34.233 "name": "BaseBdev1", 00:17:34.233 "uuid": "a839a1ba-4535-485e-99de-ecf6865e5220", 00:17:34.233 "is_configured": true, 00:17:34.233 "data_offset": 0, 00:17:34.233 "data_size": 65536 00:17:34.233 }, 00:17:34.233 { 00:17:34.233 "name": "BaseBdev2", 00:17:34.233 "uuid": "7ebe95bd-9386-4f29-b224-c2291219e047", 00:17:34.233 "is_configured": true, 00:17:34.233 "data_offset": 0, 00:17:34.233 "data_size": 65536 00:17:34.233 }, 00:17:34.233 { 
00:17:34.233 "name": "BaseBdev3", 00:17:34.233 "uuid": "08895449-5925-4b07-8d83-5f57307e2e5d", 00:17:34.233 "is_configured": true, 00:17:34.233 "data_offset": 0, 00:17:34.233 "data_size": 65536 00:17:34.233 }, 00:17:34.233 { 00:17:34.233 "name": "BaseBdev4", 00:17:34.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.234 "is_configured": false, 00:17:34.234 "data_offset": 0, 00:17:34.234 "data_size": 0 00:17:34.234 } 00:17:34.234 ] 00:17:34.234 }' 00:17:34.234 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.234 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.799 [2024-11-20 14:34:35.678853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:34.799 [2024-11-20 14:34:35.679215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:34.799 [2024-11-20 14:34:35.679359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:34.799 [2024-11-20 14:34:35.679803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:34.799 [2024-11-20 14:34:35.687012] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:34.799 [2024-11-20 14:34:35.687177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:34.799 BaseBdev4 00:17:34.799 [2024-11-20 14:34:35.687742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.799 14:34:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.799 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.799 [ 00:17:34.799 { 00:17:34.799 "name": "BaseBdev4", 00:17:34.799 "aliases": [ 00:17:34.799 "644673e7-0fed-46c4-9eb1-a1ba9c7c4a9b" 00:17:34.799 ], 00:17:34.799 "product_name": "Malloc disk", 00:17:34.799 "block_size": 512, 00:17:34.799 "num_blocks": 65536, 00:17:34.799 "uuid": "644673e7-0fed-46c4-9eb1-a1ba9c7c4a9b", 00:17:34.799 "assigned_rate_limits": { 00:17:34.799 "rw_ios_per_sec": 0, 00:17:34.799 
"rw_mbytes_per_sec": 0, 00:17:34.799 "r_mbytes_per_sec": 0, 00:17:34.799 "w_mbytes_per_sec": 0 00:17:34.799 }, 00:17:34.799 "claimed": true, 00:17:34.799 "claim_type": "exclusive_write", 00:17:34.799 "zoned": false, 00:17:34.799 "supported_io_types": { 00:17:34.799 "read": true, 00:17:34.799 "write": true, 00:17:34.799 "unmap": true, 00:17:34.799 "flush": true, 00:17:34.799 "reset": true, 00:17:34.799 "nvme_admin": false, 00:17:34.799 "nvme_io": false, 00:17:34.799 "nvme_io_md": false, 00:17:34.799 "write_zeroes": true, 00:17:34.799 "zcopy": true, 00:17:34.799 "get_zone_info": false, 00:17:34.799 "zone_management": false, 00:17:34.799 "zone_append": false, 00:17:34.799 "compare": false, 00:17:34.799 "compare_and_write": false, 00:17:34.799 "abort": true, 00:17:34.799 "seek_hole": false, 00:17:34.799 "seek_data": false, 00:17:34.799 "copy": true, 00:17:34.799 "nvme_iov_md": false 00:17:34.799 }, 00:17:34.799 "memory_domains": [ 00:17:34.799 { 00:17:34.799 "dma_device_id": "system", 00:17:34.799 "dma_device_type": 1 00:17:34.799 }, 00:17:34.799 { 00:17:34.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.799 "dma_device_type": 2 00:17:34.799 } 00:17:34.799 ], 00:17:34.800 "driver_specific": {} 00:17:34.800 } 00:17:34.800 ] 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.800 14:34:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.800 "name": "Existed_Raid", 00:17:34.800 "uuid": "689ef0a1-c9a3-4f0e-9165-b8a650e691bb", 00:17:34.800 "strip_size_kb": 64, 00:17:34.800 "state": "online", 00:17:34.800 "raid_level": "raid5f", 00:17:34.800 "superblock": false, 00:17:34.800 "num_base_bdevs": 4, 00:17:34.800 "num_base_bdevs_discovered": 4, 00:17:34.800 "num_base_bdevs_operational": 4, 00:17:34.800 "base_bdevs_list": [ 00:17:34.800 { 00:17:34.800 "name": 
"BaseBdev1", 00:17:34.800 "uuid": "a839a1ba-4535-485e-99de-ecf6865e5220", 00:17:34.800 "is_configured": true, 00:17:34.800 "data_offset": 0, 00:17:34.800 "data_size": 65536 00:17:34.800 }, 00:17:34.800 { 00:17:34.800 "name": "BaseBdev2", 00:17:34.800 "uuid": "7ebe95bd-9386-4f29-b224-c2291219e047", 00:17:34.800 "is_configured": true, 00:17:34.800 "data_offset": 0, 00:17:34.800 "data_size": 65536 00:17:34.800 }, 00:17:34.800 { 00:17:34.800 "name": "BaseBdev3", 00:17:34.800 "uuid": "08895449-5925-4b07-8d83-5f57307e2e5d", 00:17:34.800 "is_configured": true, 00:17:34.800 "data_offset": 0, 00:17:34.800 "data_size": 65536 00:17:34.800 }, 00:17:34.800 { 00:17:34.800 "name": "BaseBdev4", 00:17:34.800 "uuid": "644673e7-0fed-46c4-9eb1-a1ba9c7c4a9b", 00:17:34.800 "is_configured": true, 00:17:34.800 "data_offset": 0, 00:17:34.800 "data_size": 65536 00:17:34.800 } 00:17:34.800 ] 00:17:34.800 }' 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.800 14:34:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.367 [2024-11-20 14:34:36.212157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:35.367 "name": "Existed_Raid", 00:17:35.367 "aliases": [ 00:17:35.367 "689ef0a1-c9a3-4f0e-9165-b8a650e691bb" 00:17:35.367 ], 00:17:35.367 "product_name": "Raid Volume", 00:17:35.367 "block_size": 512, 00:17:35.367 "num_blocks": 196608, 00:17:35.367 "uuid": "689ef0a1-c9a3-4f0e-9165-b8a650e691bb", 00:17:35.367 "assigned_rate_limits": { 00:17:35.367 "rw_ios_per_sec": 0, 00:17:35.367 "rw_mbytes_per_sec": 0, 00:17:35.367 "r_mbytes_per_sec": 0, 00:17:35.367 "w_mbytes_per_sec": 0 00:17:35.367 }, 00:17:35.367 "claimed": false, 00:17:35.367 "zoned": false, 00:17:35.367 "supported_io_types": { 00:17:35.367 "read": true, 00:17:35.367 "write": true, 00:17:35.367 "unmap": false, 00:17:35.367 "flush": false, 00:17:35.367 "reset": true, 00:17:35.367 "nvme_admin": false, 00:17:35.367 "nvme_io": false, 00:17:35.367 "nvme_io_md": false, 00:17:35.367 "write_zeroes": true, 00:17:35.367 "zcopy": false, 00:17:35.367 "get_zone_info": false, 00:17:35.367 "zone_management": false, 00:17:35.367 "zone_append": false, 00:17:35.367 "compare": false, 00:17:35.367 "compare_and_write": false, 00:17:35.367 "abort": false, 00:17:35.367 "seek_hole": false, 00:17:35.367 "seek_data": false, 00:17:35.367 "copy": false, 00:17:35.367 "nvme_iov_md": false 00:17:35.367 }, 00:17:35.367 "driver_specific": { 00:17:35.367 "raid": { 00:17:35.367 "uuid": "689ef0a1-c9a3-4f0e-9165-b8a650e691bb", 00:17:35.367 "strip_size_kb": 64, 
00:17:35.367 "state": "online", 00:17:35.367 "raid_level": "raid5f", 00:17:35.367 "superblock": false, 00:17:35.367 "num_base_bdevs": 4, 00:17:35.367 "num_base_bdevs_discovered": 4, 00:17:35.367 "num_base_bdevs_operational": 4, 00:17:35.367 "base_bdevs_list": [ 00:17:35.367 { 00:17:35.367 "name": "BaseBdev1", 00:17:35.367 "uuid": "a839a1ba-4535-485e-99de-ecf6865e5220", 00:17:35.367 "is_configured": true, 00:17:35.367 "data_offset": 0, 00:17:35.367 "data_size": 65536 00:17:35.367 }, 00:17:35.367 { 00:17:35.367 "name": "BaseBdev2", 00:17:35.367 "uuid": "7ebe95bd-9386-4f29-b224-c2291219e047", 00:17:35.367 "is_configured": true, 00:17:35.367 "data_offset": 0, 00:17:35.367 "data_size": 65536 00:17:35.367 }, 00:17:35.367 { 00:17:35.367 "name": "BaseBdev3", 00:17:35.367 "uuid": "08895449-5925-4b07-8d83-5f57307e2e5d", 00:17:35.367 "is_configured": true, 00:17:35.367 "data_offset": 0, 00:17:35.367 "data_size": 65536 00:17:35.367 }, 00:17:35.367 { 00:17:35.367 "name": "BaseBdev4", 00:17:35.367 "uuid": "644673e7-0fed-46c4-9eb1-a1ba9c7c4a9b", 00:17:35.367 "is_configured": true, 00:17:35.367 "data_offset": 0, 00:17:35.367 "data_size": 65536 00:17:35.367 } 00:17:35.367 ] 00:17:35.367 } 00:17:35.367 } 00:17:35.367 }' 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:35.367 BaseBdev2 00:17:35.367 BaseBdev3 00:17:35.367 BaseBdev4' 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.367 14:34:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.367 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.368 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:35.626 [2024-11-20 14:34:36.560011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.626 14:34:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.626 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.929 14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.929 "name": "Existed_Raid", 00:17:35.929 "uuid": "689ef0a1-c9a3-4f0e-9165-b8a650e691bb", 00:17:35.929 "strip_size_kb": 64, 00:17:35.929 "state": "online", 00:17:35.929 "raid_level": "raid5f", 00:17:35.929 "superblock": false, 00:17:35.929 "num_base_bdevs": 4, 00:17:35.929 "num_base_bdevs_discovered": 3, 00:17:35.929 "num_base_bdevs_operational": 3, 00:17:35.929 "base_bdevs_list": [ 00:17:35.929 { 00:17:35.929 "name": null, 00:17:35.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.929 "is_configured": false, 00:17:35.929 "data_offset": 0, 00:17:35.929 "data_size": 65536 00:17:35.929 }, 00:17:35.929 { 00:17:35.929 "name": "BaseBdev2", 00:17:35.929 "uuid": "7ebe95bd-9386-4f29-b224-c2291219e047", 00:17:35.929 "is_configured": true, 00:17:35.929 "data_offset": 0, 00:17:35.929 "data_size": 65536 00:17:35.929 }, 00:17:35.929 { 00:17:35.929 "name": "BaseBdev3", 00:17:35.929 "uuid": "08895449-5925-4b07-8d83-5f57307e2e5d", 00:17:35.929 "is_configured": true, 00:17:35.929 "data_offset": 0, 00:17:35.929 "data_size": 65536 00:17:35.929 }, 00:17:35.929 { 00:17:35.929 "name": "BaseBdev4", 00:17:35.929 "uuid": "644673e7-0fed-46c4-9eb1-a1ba9c7c4a9b", 00:17:35.929 "is_configured": true, 00:17:35.929 "data_offset": 0, 00:17:35.929 "data_size": 65536 00:17:35.929 } 00:17:35.929 ] 00:17:35.929 }' 00:17:35.929 
14:34:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.929 14:34:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.187 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.187 [2024-11-20 14:34:37.236939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:36.187 [2024-11-20 14:34:37.237110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.445 [2024-11-20 14:34:37.326092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.445 [2024-11-20 14:34:37.386138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.445 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.704 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.704 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.705 [2024-11-20 14:34:37.535924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:36.705 [2024-11-20 14:34:37.536018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:36.705 14:34:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.705 BaseBdev2 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.705 [ 00:17:36.705 { 00:17:36.705 "name": "BaseBdev2", 00:17:36.705 "aliases": [ 00:17:36.705 "075a0cab-c362-4be9-a00c-252fc4a4c21e" 00:17:36.705 ], 00:17:36.705 "product_name": "Malloc disk", 00:17:36.705 "block_size": 512, 00:17:36.705 "num_blocks": 65536, 00:17:36.705 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:36.705 "assigned_rate_limits": { 00:17:36.705 "rw_ios_per_sec": 0, 00:17:36.705 "rw_mbytes_per_sec": 0, 00:17:36.705 "r_mbytes_per_sec": 0, 00:17:36.705 "w_mbytes_per_sec": 0 00:17:36.705 }, 00:17:36.705 "claimed": false, 00:17:36.705 "zoned": false, 00:17:36.705 "supported_io_types": { 00:17:36.705 "read": true, 00:17:36.705 "write": true, 00:17:36.705 "unmap": true, 00:17:36.705 "flush": true, 00:17:36.705 "reset": true, 00:17:36.705 "nvme_admin": false, 00:17:36.705 "nvme_io": false, 00:17:36.705 "nvme_io_md": false, 00:17:36.705 "write_zeroes": true, 00:17:36.705 "zcopy": true, 00:17:36.705 "get_zone_info": false, 00:17:36.705 "zone_management": false, 00:17:36.705 "zone_append": false, 00:17:36.705 "compare": false, 00:17:36.705 "compare_and_write": false, 00:17:36.705 "abort": true, 00:17:36.705 "seek_hole": false, 00:17:36.705 "seek_data": false, 00:17:36.705 "copy": true, 00:17:36.705 "nvme_iov_md": false 00:17:36.705 }, 00:17:36.705 "memory_domains": [ 00:17:36.705 { 00:17:36.705 "dma_device_id": "system", 00:17:36.705 "dma_device_type": 1 00:17:36.705 }, 
00:17:36.705 { 00:17:36.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.705 "dma_device_type": 2 00:17:36.705 } 00:17:36.705 ], 00:17:36.705 "driver_specific": {} 00:17:36.705 } 00:17:36.705 ] 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.705 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.965 BaseBdev3 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.965 [ 00:17:36.965 { 00:17:36.965 "name": "BaseBdev3", 00:17:36.965 "aliases": [ 00:17:36.965 "64aad95c-2619-4061-bd19-accb99faa6d4" 00:17:36.965 ], 00:17:36.965 "product_name": "Malloc disk", 00:17:36.965 "block_size": 512, 00:17:36.965 "num_blocks": 65536, 00:17:36.965 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:36.965 "assigned_rate_limits": { 00:17:36.965 "rw_ios_per_sec": 0, 00:17:36.965 "rw_mbytes_per_sec": 0, 00:17:36.965 "r_mbytes_per_sec": 0, 00:17:36.965 "w_mbytes_per_sec": 0 00:17:36.965 }, 00:17:36.965 "claimed": false, 00:17:36.965 "zoned": false, 00:17:36.965 "supported_io_types": { 00:17:36.965 "read": true, 00:17:36.965 "write": true, 00:17:36.965 "unmap": true, 00:17:36.965 "flush": true, 00:17:36.965 "reset": true, 00:17:36.965 "nvme_admin": false, 00:17:36.965 "nvme_io": false, 00:17:36.965 "nvme_io_md": false, 00:17:36.965 "write_zeroes": true, 00:17:36.965 "zcopy": true, 00:17:36.965 "get_zone_info": false, 00:17:36.965 "zone_management": false, 00:17:36.965 "zone_append": false, 00:17:36.965 "compare": false, 00:17:36.965 "compare_and_write": false, 00:17:36.965 "abort": true, 00:17:36.965 "seek_hole": false, 00:17:36.965 "seek_data": false, 00:17:36.965 "copy": true, 00:17:36.965 "nvme_iov_md": false 00:17:36.965 }, 00:17:36.965 "memory_domains": [ 00:17:36.965 { 00:17:36.965 "dma_device_id": "system", 00:17:36.965 
"dma_device_type": 1 00:17:36.965 }, 00:17:36.965 { 00:17:36.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.965 "dma_device_type": 2 00:17:36.965 } 00:17:36.965 ], 00:17:36.965 "driver_specific": {} 00:17:36.965 } 00:17:36.965 ] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.965 BaseBdev4 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:36.965 14:34:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.965 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.965 [ 00:17:36.965 { 00:17:36.965 "name": "BaseBdev4", 00:17:36.965 "aliases": [ 00:17:36.965 "b7d94d3d-7395-4559-8fef-19faebd8c08f" 00:17:36.965 ], 00:17:36.965 "product_name": "Malloc disk", 00:17:36.965 "block_size": 512, 00:17:36.965 "num_blocks": 65536, 00:17:36.965 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:36.965 "assigned_rate_limits": { 00:17:36.965 "rw_ios_per_sec": 0, 00:17:36.965 "rw_mbytes_per_sec": 0, 00:17:36.965 "r_mbytes_per_sec": 0, 00:17:36.965 "w_mbytes_per_sec": 0 00:17:36.965 }, 00:17:36.965 "claimed": false, 00:17:36.965 "zoned": false, 00:17:36.965 "supported_io_types": { 00:17:36.965 "read": true, 00:17:36.965 "write": true, 00:17:36.965 "unmap": true, 00:17:36.965 "flush": true, 00:17:36.965 "reset": true, 00:17:36.965 "nvme_admin": false, 00:17:36.965 "nvme_io": false, 00:17:36.965 "nvme_io_md": false, 00:17:36.965 "write_zeroes": true, 00:17:36.965 "zcopy": true, 00:17:36.965 "get_zone_info": false, 00:17:36.965 "zone_management": false, 00:17:36.965 "zone_append": false, 00:17:36.965 "compare": false, 00:17:36.965 "compare_and_write": false, 00:17:36.965 "abort": true, 00:17:36.965 "seek_hole": false, 00:17:36.965 "seek_data": false, 00:17:36.965 "copy": true, 00:17:36.965 "nvme_iov_md": false 00:17:36.965 }, 00:17:36.965 "memory_domains": [ 00:17:36.965 { 00:17:36.965 
"dma_device_id": "system", 00:17:36.965 "dma_device_type": 1 00:17:36.965 }, 00:17:36.965 { 00:17:36.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.965 "dma_device_type": 2 00:17:36.965 } 00:17:36.965 ], 00:17:36.965 "driver_specific": {} 00:17:36.965 } 00:17:36.966 ] 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.966 [2024-11-20 14:34:37.916830] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:36.966 [2024-11-20 14:34:37.916909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:36.966 [2024-11-20 14:34:37.916946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.966 [2024-11-20 14:34:37.919464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.966 [2024-11-20 14:34:37.919548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.966 "name": "Existed_Raid", 00:17:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.966 "strip_size_kb": 64, 00:17:36.966 "state": "configuring", 00:17:36.966 "raid_level": "raid5f", 00:17:36.966 "superblock": false, 00:17:36.966 
"num_base_bdevs": 4, 00:17:36.966 "num_base_bdevs_discovered": 3, 00:17:36.966 "num_base_bdevs_operational": 4, 00:17:36.966 "base_bdevs_list": [ 00:17:36.966 { 00:17:36.966 "name": "BaseBdev1", 00:17:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.966 "is_configured": false, 00:17:36.966 "data_offset": 0, 00:17:36.966 "data_size": 0 00:17:36.966 }, 00:17:36.966 { 00:17:36.966 "name": "BaseBdev2", 00:17:36.966 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:36.966 "is_configured": true, 00:17:36.966 "data_offset": 0, 00:17:36.966 "data_size": 65536 00:17:36.966 }, 00:17:36.966 { 00:17:36.966 "name": "BaseBdev3", 00:17:36.966 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:36.966 "is_configured": true, 00:17:36.966 "data_offset": 0, 00:17:36.966 "data_size": 65536 00:17:36.966 }, 00:17:36.966 { 00:17:36.966 "name": "BaseBdev4", 00:17:36.966 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:36.966 "is_configured": true, 00:17:36.966 "data_offset": 0, 00:17:36.966 "data_size": 65536 00:17:36.966 } 00:17:36.966 ] 00:17:36.966 }' 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.966 14:34:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.533 [2024-11-20 14:34:38.441174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.533 "name": "Existed_Raid", 00:17:37.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.533 "strip_size_kb": 64, 00:17:37.533 "state": "configuring", 00:17:37.533 "raid_level": "raid5f", 00:17:37.533 "superblock": false, 00:17:37.533 "num_base_bdevs": 4, 
00:17:37.533 "num_base_bdevs_discovered": 2, 00:17:37.533 "num_base_bdevs_operational": 4, 00:17:37.533 "base_bdevs_list": [ 00:17:37.533 { 00:17:37.533 "name": "BaseBdev1", 00:17:37.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.533 "is_configured": false, 00:17:37.533 "data_offset": 0, 00:17:37.533 "data_size": 0 00:17:37.533 }, 00:17:37.533 { 00:17:37.533 "name": null, 00:17:37.533 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:37.533 "is_configured": false, 00:17:37.533 "data_offset": 0, 00:17:37.533 "data_size": 65536 00:17:37.533 }, 00:17:37.533 { 00:17:37.533 "name": "BaseBdev3", 00:17:37.533 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:37.533 "is_configured": true, 00:17:37.533 "data_offset": 0, 00:17:37.533 "data_size": 65536 00:17:37.533 }, 00:17:37.533 { 00:17:37.533 "name": "BaseBdev4", 00:17:37.533 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:37.533 "is_configured": true, 00:17:37.533 "data_offset": 0, 00:17:37.533 "data_size": 65536 00:17:37.533 } 00:17:37.533 ] 00:17:37.533 }' 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.533 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.099 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.099 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.099 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.099 14:34:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:38.099 14:34:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:38.099 14:34:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.099 [2024-11-20 14:34:39.054539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.099 BaseBdev1 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:38.099 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.099 14:34:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.099 [ 00:17:38.099 { 00:17:38.099 "name": "BaseBdev1", 00:17:38.099 "aliases": [ 00:17:38.099 "9f1a7d9f-10f3-4432-bf06-be3681d6454c" 00:17:38.100 ], 00:17:38.100 "product_name": "Malloc disk", 00:17:38.100 "block_size": 512, 00:17:38.100 "num_blocks": 65536, 00:17:38.100 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:38.100 "assigned_rate_limits": { 00:17:38.100 "rw_ios_per_sec": 0, 00:17:38.100 "rw_mbytes_per_sec": 0, 00:17:38.100 "r_mbytes_per_sec": 0, 00:17:38.100 "w_mbytes_per_sec": 0 00:17:38.100 }, 00:17:38.100 "claimed": true, 00:17:38.100 "claim_type": "exclusive_write", 00:17:38.100 "zoned": false, 00:17:38.100 "supported_io_types": { 00:17:38.100 "read": true, 00:17:38.100 "write": true, 00:17:38.100 "unmap": true, 00:17:38.100 "flush": true, 00:17:38.100 "reset": true, 00:17:38.100 "nvme_admin": false, 00:17:38.100 "nvme_io": false, 00:17:38.100 "nvme_io_md": false, 00:17:38.100 "write_zeroes": true, 00:17:38.100 "zcopy": true, 00:17:38.100 "get_zone_info": false, 00:17:38.100 "zone_management": false, 00:17:38.100 "zone_append": false, 00:17:38.100 "compare": false, 00:17:38.100 "compare_and_write": false, 00:17:38.100 "abort": true, 00:17:38.100 "seek_hole": false, 00:17:38.100 "seek_data": false, 00:17:38.100 "copy": true, 00:17:38.100 "nvme_iov_md": false 00:17:38.100 }, 00:17:38.100 "memory_domains": [ 00:17:38.100 { 00:17:38.100 "dma_device_id": "system", 00:17:38.100 "dma_device_type": 1 00:17:38.100 }, 00:17:38.100 { 00:17:38.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.100 "dma_device_type": 2 00:17:38.100 } 00:17:38.100 ], 00:17:38.100 "driver_specific": {} 00:17:38.100 } 00:17:38.100 ] 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:38.100 14:34:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.100 "name": "Existed_Raid", 00:17:38.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.100 "strip_size_kb": 64, 00:17:38.100 "state": 
"configuring", 00:17:38.100 "raid_level": "raid5f", 00:17:38.100 "superblock": false, 00:17:38.100 "num_base_bdevs": 4, 00:17:38.100 "num_base_bdevs_discovered": 3, 00:17:38.100 "num_base_bdevs_operational": 4, 00:17:38.100 "base_bdevs_list": [ 00:17:38.100 { 00:17:38.100 "name": "BaseBdev1", 00:17:38.100 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:38.100 "is_configured": true, 00:17:38.100 "data_offset": 0, 00:17:38.100 "data_size": 65536 00:17:38.100 }, 00:17:38.100 { 00:17:38.100 "name": null, 00:17:38.100 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:38.100 "is_configured": false, 00:17:38.100 "data_offset": 0, 00:17:38.100 "data_size": 65536 00:17:38.100 }, 00:17:38.100 { 00:17:38.100 "name": "BaseBdev3", 00:17:38.100 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:38.100 "is_configured": true, 00:17:38.100 "data_offset": 0, 00:17:38.100 "data_size": 65536 00:17:38.100 }, 00:17:38.100 { 00:17:38.100 "name": "BaseBdev4", 00:17:38.100 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:38.100 "is_configured": true, 00:17:38.100 "data_offset": 0, 00:17:38.100 "data_size": 65536 00:17:38.100 } 00:17:38.100 ] 00:17:38.100 }' 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.100 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.665 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:38.665 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.665 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.665 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.665 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.666 14:34:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.666 [2024-11-20 14:34:39.694890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.666 14:34:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.666 14:34:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.924 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.924 "name": "Existed_Raid", 00:17:38.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.924 "strip_size_kb": 64, 00:17:38.924 "state": "configuring", 00:17:38.924 "raid_level": "raid5f", 00:17:38.924 "superblock": false, 00:17:38.924 "num_base_bdevs": 4, 00:17:38.924 "num_base_bdevs_discovered": 2, 00:17:38.924 "num_base_bdevs_operational": 4, 00:17:38.924 "base_bdevs_list": [ 00:17:38.924 { 00:17:38.924 "name": "BaseBdev1", 00:17:38.924 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:38.924 "is_configured": true, 00:17:38.924 "data_offset": 0, 00:17:38.924 "data_size": 65536 00:17:38.924 }, 00:17:38.924 { 00:17:38.924 "name": null, 00:17:38.924 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:38.924 "is_configured": false, 00:17:38.924 "data_offset": 0, 00:17:38.924 "data_size": 65536 00:17:38.924 }, 00:17:38.924 { 00:17:38.924 "name": null, 00:17:38.924 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:38.924 "is_configured": false, 00:17:38.924 "data_offset": 0, 00:17:38.924 "data_size": 65536 00:17:38.924 }, 00:17:38.924 { 00:17:38.924 "name": "BaseBdev4", 00:17:38.924 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:38.924 "is_configured": true, 00:17:38.924 "data_offset": 0, 00:17:38.924 "data_size": 65536 00:17:38.924 } 00:17:38.924 ] 00:17:38.924 }' 00:17:38.924 14:34:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.924 14:34:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.182 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.182 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.182 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.182 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:39.182 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.441 [2024-11-20 14:34:40.263052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.441 
14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.441 "name": "Existed_Raid", 00:17:39.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.441 "strip_size_kb": 64, 00:17:39.441 "state": "configuring", 00:17:39.441 "raid_level": "raid5f", 00:17:39.441 "superblock": false, 00:17:39.441 "num_base_bdevs": 4, 00:17:39.441 "num_base_bdevs_discovered": 3, 00:17:39.441 "num_base_bdevs_operational": 4, 00:17:39.441 "base_bdevs_list": [ 00:17:39.441 { 00:17:39.441 "name": "BaseBdev1", 00:17:39.441 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:39.441 "is_configured": true, 00:17:39.441 "data_offset": 0, 00:17:39.441 "data_size": 65536 00:17:39.441 }, 00:17:39.441 { 00:17:39.441 "name": null, 00:17:39.441 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:39.441 "is_configured": 
false, 00:17:39.441 "data_offset": 0, 00:17:39.441 "data_size": 65536 00:17:39.441 }, 00:17:39.441 { 00:17:39.441 "name": "BaseBdev3", 00:17:39.441 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:39.441 "is_configured": true, 00:17:39.441 "data_offset": 0, 00:17:39.441 "data_size": 65536 00:17:39.441 }, 00:17:39.441 { 00:17:39.441 "name": "BaseBdev4", 00:17:39.441 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:39.441 "is_configured": true, 00:17:39.441 "data_offset": 0, 00:17:39.441 "data_size": 65536 00:17:39.441 } 00:17:39.441 ] 00:17:39.441 }' 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.441 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.008 [2024-11-20 14:34:40.815267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.008 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.008 "name": "Existed_Raid", 00:17:40.008 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:40.008 "strip_size_kb": 64, 00:17:40.008 "state": "configuring", 00:17:40.008 "raid_level": "raid5f", 00:17:40.008 "superblock": false, 00:17:40.008 "num_base_bdevs": 4, 00:17:40.008 "num_base_bdevs_discovered": 2, 00:17:40.008 "num_base_bdevs_operational": 4, 00:17:40.008 "base_bdevs_list": [ 00:17:40.008 { 00:17:40.008 "name": null, 00:17:40.008 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:40.008 "is_configured": false, 00:17:40.008 "data_offset": 0, 00:17:40.008 "data_size": 65536 00:17:40.009 }, 00:17:40.009 { 00:17:40.009 "name": null, 00:17:40.009 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:40.009 "is_configured": false, 00:17:40.009 "data_offset": 0, 00:17:40.009 "data_size": 65536 00:17:40.009 }, 00:17:40.009 { 00:17:40.009 "name": "BaseBdev3", 00:17:40.009 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:40.009 "is_configured": true, 00:17:40.009 "data_offset": 0, 00:17:40.009 "data_size": 65536 00:17:40.009 }, 00:17:40.009 { 00:17:40.009 "name": "BaseBdev4", 00:17:40.009 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:40.009 "is_configured": true, 00:17:40.009 "data_offset": 0, 00:17:40.009 "data_size": 65536 00:17:40.009 } 00:17:40.009 ] 00:17:40.009 }' 00:17:40.009 14:34:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.009 14:34:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 [2024-11-20 14:34:41.469508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.575 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.575 "name": "Existed_Raid", 00:17:40.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.575 "strip_size_kb": 64, 00:17:40.575 "state": "configuring", 00:17:40.575 "raid_level": "raid5f", 00:17:40.575 "superblock": false, 00:17:40.575 "num_base_bdevs": 4, 00:17:40.575 "num_base_bdevs_discovered": 3, 00:17:40.575 "num_base_bdevs_operational": 4, 00:17:40.575 "base_bdevs_list": [ 00:17:40.575 { 00:17:40.575 "name": null, 00:17:40.575 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:40.575 "is_configured": false, 00:17:40.575 "data_offset": 0, 00:17:40.575 "data_size": 65536 00:17:40.575 }, 00:17:40.575 { 00:17:40.576 "name": "BaseBdev2", 00:17:40.576 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:40.576 "is_configured": true, 00:17:40.576 "data_offset": 0, 00:17:40.576 "data_size": 65536 00:17:40.576 }, 00:17:40.576 { 00:17:40.576 "name": "BaseBdev3", 00:17:40.576 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:40.576 "is_configured": true, 00:17:40.576 "data_offset": 0, 00:17:40.576 "data_size": 65536 00:17:40.576 }, 00:17:40.576 { 00:17:40.576 "name": "BaseBdev4", 00:17:40.576 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:40.576 "is_configured": true, 00:17:40.576 "data_offset": 0, 00:17:40.576 "data_size": 65536 00:17:40.576 } 00:17:40.576 ] 00:17:40.576 }' 00:17:40.576 14:34:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.576 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.143 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:41.143 14:34:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.143 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.143 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.143 14:34:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f1a7d9f-10f3-4432-bf06-be3681d6454c 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.143 [2024-11-20 14:34:42.120485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:41.143 [2024-11-20 
14:34:42.120571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:41.143 [2024-11-20 14:34:42.120584] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:41.143 [2024-11-20 14:34:42.120934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:41.143 [2024-11-20 14:34:42.127574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:41.143 [2024-11-20 14:34:42.127735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:41.143 [2024-11-20 14:34:42.128076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.143 NewBaseBdev 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.143 [ 00:17:41.143 { 00:17:41.143 "name": "NewBaseBdev", 00:17:41.143 "aliases": [ 00:17:41.143 "9f1a7d9f-10f3-4432-bf06-be3681d6454c" 00:17:41.143 ], 00:17:41.143 "product_name": "Malloc disk", 00:17:41.143 "block_size": 512, 00:17:41.143 "num_blocks": 65536, 00:17:41.143 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:41.143 "assigned_rate_limits": { 00:17:41.143 "rw_ios_per_sec": 0, 00:17:41.143 "rw_mbytes_per_sec": 0, 00:17:41.143 "r_mbytes_per_sec": 0, 00:17:41.143 "w_mbytes_per_sec": 0 00:17:41.143 }, 00:17:41.143 "claimed": true, 00:17:41.143 "claim_type": "exclusive_write", 00:17:41.143 "zoned": false, 00:17:41.143 "supported_io_types": { 00:17:41.143 "read": true, 00:17:41.143 "write": true, 00:17:41.143 "unmap": true, 00:17:41.143 "flush": true, 00:17:41.143 "reset": true, 00:17:41.143 "nvme_admin": false, 00:17:41.143 "nvme_io": false, 00:17:41.143 "nvme_io_md": false, 00:17:41.143 "write_zeroes": true, 00:17:41.143 "zcopy": true, 00:17:41.143 "get_zone_info": false, 00:17:41.143 "zone_management": false, 00:17:41.143 "zone_append": false, 00:17:41.143 "compare": false, 00:17:41.143 "compare_and_write": false, 00:17:41.143 "abort": true, 00:17:41.143 "seek_hole": false, 00:17:41.143 "seek_data": false, 00:17:41.143 "copy": true, 00:17:41.143 "nvme_iov_md": false 00:17:41.143 }, 00:17:41.143 "memory_domains": [ 00:17:41.143 { 00:17:41.143 "dma_device_id": "system", 00:17:41.143 "dma_device_type": 1 00:17:41.143 }, 00:17:41.143 { 00:17:41.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.143 "dma_device_type": 2 00:17:41.143 } 
00:17:41.143 ], 00:17:41.143 "driver_specific": {} 00:17:41.143 } 00:17:41.143 ] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.143 14:34:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.402 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.402 "name": "Existed_Raid", 00:17:41.402 "uuid": "8b60239b-322d-4826-b32a-13fb5b240b92", 00:17:41.402 "strip_size_kb": 64, 00:17:41.402 "state": "online", 00:17:41.402 "raid_level": "raid5f", 00:17:41.402 "superblock": false, 00:17:41.402 "num_base_bdevs": 4, 00:17:41.402 "num_base_bdevs_discovered": 4, 00:17:41.402 "num_base_bdevs_operational": 4, 00:17:41.402 "base_bdevs_list": [ 00:17:41.402 { 00:17:41.402 "name": "NewBaseBdev", 00:17:41.402 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:41.402 "is_configured": true, 00:17:41.402 "data_offset": 0, 00:17:41.402 "data_size": 65536 00:17:41.402 }, 00:17:41.402 { 00:17:41.402 "name": "BaseBdev2", 00:17:41.402 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:41.402 "is_configured": true, 00:17:41.402 "data_offset": 0, 00:17:41.402 "data_size": 65536 00:17:41.402 }, 00:17:41.402 { 00:17:41.402 "name": "BaseBdev3", 00:17:41.402 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:41.402 "is_configured": true, 00:17:41.402 "data_offset": 0, 00:17:41.402 "data_size": 65536 00:17:41.402 }, 00:17:41.402 { 00:17:41.402 "name": "BaseBdev4", 00:17:41.402 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:41.402 "is_configured": true, 00:17:41.402 "data_offset": 0, 00:17:41.402 "data_size": 65536 00:17:41.402 } 00:17:41.402 ] 00:17:41.402 }' 00:17:41.402 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.402 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.970 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:41.970 [2024-11-20 14:34:42.727931] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:41.971 "name": "Existed_Raid", 00:17:41.971 "aliases": [ 00:17:41.971 "8b60239b-322d-4826-b32a-13fb5b240b92" 00:17:41.971 ], 00:17:41.971 "product_name": "Raid Volume", 00:17:41.971 "block_size": 512, 00:17:41.971 "num_blocks": 196608, 00:17:41.971 "uuid": "8b60239b-322d-4826-b32a-13fb5b240b92", 00:17:41.971 "assigned_rate_limits": { 00:17:41.971 "rw_ios_per_sec": 0, 00:17:41.971 "rw_mbytes_per_sec": 0, 00:17:41.971 "r_mbytes_per_sec": 0, 00:17:41.971 "w_mbytes_per_sec": 0 00:17:41.971 }, 00:17:41.971 "claimed": false, 00:17:41.971 "zoned": false, 00:17:41.971 "supported_io_types": { 00:17:41.971 "read": true, 00:17:41.971 "write": true, 00:17:41.971 "unmap": false, 00:17:41.971 "flush": false, 00:17:41.971 "reset": true, 00:17:41.971 "nvme_admin": false, 00:17:41.971 "nvme_io": false, 00:17:41.971 "nvme_io_md": 
false, 00:17:41.971 "write_zeroes": true, 00:17:41.971 "zcopy": false, 00:17:41.971 "get_zone_info": false, 00:17:41.971 "zone_management": false, 00:17:41.971 "zone_append": false, 00:17:41.971 "compare": false, 00:17:41.971 "compare_and_write": false, 00:17:41.971 "abort": false, 00:17:41.971 "seek_hole": false, 00:17:41.971 "seek_data": false, 00:17:41.971 "copy": false, 00:17:41.971 "nvme_iov_md": false 00:17:41.971 }, 00:17:41.971 "driver_specific": { 00:17:41.971 "raid": { 00:17:41.971 "uuid": "8b60239b-322d-4826-b32a-13fb5b240b92", 00:17:41.971 "strip_size_kb": 64, 00:17:41.971 "state": "online", 00:17:41.971 "raid_level": "raid5f", 00:17:41.971 "superblock": false, 00:17:41.971 "num_base_bdevs": 4, 00:17:41.971 "num_base_bdevs_discovered": 4, 00:17:41.971 "num_base_bdevs_operational": 4, 00:17:41.971 "base_bdevs_list": [ 00:17:41.971 { 00:17:41.971 "name": "NewBaseBdev", 00:17:41.971 "uuid": "9f1a7d9f-10f3-4432-bf06-be3681d6454c", 00:17:41.971 "is_configured": true, 00:17:41.971 "data_offset": 0, 00:17:41.971 "data_size": 65536 00:17:41.971 }, 00:17:41.971 { 00:17:41.971 "name": "BaseBdev2", 00:17:41.971 "uuid": "075a0cab-c362-4be9-a00c-252fc4a4c21e", 00:17:41.971 "is_configured": true, 00:17:41.971 "data_offset": 0, 00:17:41.971 "data_size": 65536 00:17:41.971 }, 00:17:41.971 { 00:17:41.971 "name": "BaseBdev3", 00:17:41.971 "uuid": "64aad95c-2619-4061-bd19-accb99faa6d4", 00:17:41.971 "is_configured": true, 00:17:41.971 "data_offset": 0, 00:17:41.971 "data_size": 65536 00:17:41.971 }, 00:17:41.971 { 00:17:41.971 "name": "BaseBdev4", 00:17:41.971 "uuid": "b7d94d3d-7395-4559-8fef-19faebd8c08f", 00:17:41.971 "is_configured": true, 00:17:41.971 "data_offset": 0, 00:17:41.971 "data_size": 65536 00:17:41.971 } 00:17:41.971 ] 00:17:41.971 } 00:17:41.971 } 00:17:41.971 }' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:41.971 14:34:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:41.971 BaseBdev2 00:17:41.971 BaseBdev3 00:17:41.971 BaseBdev4' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.971 14:34:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:41.971 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.231 14:34:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.231 [2024-11-20 14:34:43.107706] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.231 [2024-11-20 14:34:43.107747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.231 [2024-11-20 14:34:43.107845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.231 [2024-11-20 14:34:43.108246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.231 [2024-11-20 14:34:43.108297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83245 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83245 ']' 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83245 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83245 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.231 killing process with pid 83245 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83245' 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83245 00:17:42.231 [2024-11-20 14:34:43.145837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.231 14:34:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83245 00:17:42.491 [2024-11-20 14:34:43.514289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.939 ************************************ 00:17:43.939 END TEST raid5f_state_function_test 00:17:43.939 ************************************ 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:43.939 00:17:43.939 real 0m12.946s 00:17:43.939 user 0m21.219s 00:17:43.939 sys 0m1.955s 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.939 14:34:44 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:43.939 14:34:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:43.939 14:34:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.939 14:34:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:43.939 ************************************ 00:17:43.939 START TEST 
raid5f_state_function_test_sb 00:17:43.939 ************************************ 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:43.939 
14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:43.939 Process raid pid: 83928 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83928 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83928' 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:43.939 14:34:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83928 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83928 ']' 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.939 14:34:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.939 [2024-11-20 14:34:44.797942] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:17:43.939 [2024-11-20 14:34:44.798402] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.218 [2024-11-20 14:34:44.989991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.218 [2024-11-20 14:34:45.135362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.478 [2024-11-20 14:34:45.356648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.478 [2024-11-20 14:34:45.356741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.045 [2024-11-20 14:34:45.826412] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.045 [2024-11-20 14:34:45.826494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.045 [2024-11-20 14:34:45.826525] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.045 [2024-11-20 14:34:45.826543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.045 [2024-11-20 14:34:45.826553] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:45.045 [2024-11-20 14:34:45.826567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.045 [2024-11-20 14:34:45.826577] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:45.045 [2024-11-20 14:34:45.826591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.045 "name": "Existed_Raid", 00:17:45.045 "uuid": "f5437e7f-5f1d-4e42-91e6-a54d58125df3", 00:17:45.045 "strip_size_kb": 64, 00:17:45.045 "state": "configuring", 00:17:45.045 "raid_level": "raid5f", 00:17:45.045 "superblock": true, 00:17:45.045 "num_base_bdevs": 4, 00:17:45.045 "num_base_bdevs_discovered": 0, 00:17:45.045 "num_base_bdevs_operational": 4, 00:17:45.045 "base_bdevs_list": [ 00:17:45.045 { 00:17:45.045 "name": "BaseBdev1", 00:17:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.045 "is_configured": false, 00:17:45.045 "data_offset": 0, 00:17:45.045 "data_size": 0 00:17:45.045 }, 00:17:45.045 { 00:17:45.045 "name": "BaseBdev2", 00:17:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.045 "is_configured": false, 00:17:45.045 "data_offset": 0, 00:17:45.045 "data_size": 0 00:17:45.045 }, 00:17:45.045 { 00:17:45.045 "name": "BaseBdev3", 00:17:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.045 "is_configured": false, 00:17:45.045 "data_offset": 0, 00:17:45.045 "data_size": 0 00:17:45.045 }, 00:17:45.045 { 00:17:45.045 "name": "BaseBdev4", 00:17:45.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.045 "is_configured": false, 00:17:45.045 "data_offset": 0, 00:17:45.045 "data_size": 0 00:17:45.045 } 00:17:45.045 ] 00:17:45.045 }' 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.045 14:34:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:45.304 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.304 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 [2024-11-20 14:34:46.362535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.563 [2024-11-20 14:34:46.362583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 [2024-11-20 14:34:46.370528] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.563 [2024-11-20 14:34:46.370599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.563 [2024-11-20 14:34:46.370629] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.563 [2024-11-20 14:34:46.370816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.563 [2024-11-20 14:34:46.370872] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.563 [2024-11-20 14:34:46.370898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.563 [2024-11-20 14:34:46.370909] 
bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:45.563 [2024-11-20 14:34:46.370924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 [2024-11-20 14:34:46.415825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.563 BaseBdev1 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.563 [ 00:17:45.563 { 00:17:45.563 "name": "BaseBdev1", 00:17:45.563 "aliases": [ 00:17:45.563 "41f856b6-39d1-454a-aaaf-18ee142c7554" 00:17:45.563 ], 00:17:45.563 "product_name": "Malloc disk", 00:17:45.563 "block_size": 512, 00:17:45.563 "num_blocks": 65536, 00:17:45.563 "uuid": "41f856b6-39d1-454a-aaaf-18ee142c7554", 00:17:45.563 "assigned_rate_limits": { 00:17:45.563 "rw_ios_per_sec": 0, 00:17:45.563 "rw_mbytes_per_sec": 0, 00:17:45.563 "r_mbytes_per_sec": 0, 00:17:45.563 "w_mbytes_per_sec": 0 00:17:45.563 }, 00:17:45.563 "claimed": true, 00:17:45.563 "claim_type": "exclusive_write", 00:17:45.563 "zoned": false, 00:17:45.563 "supported_io_types": { 00:17:45.563 "read": true, 00:17:45.563 "write": true, 00:17:45.563 "unmap": true, 00:17:45.563 "flush": true, 00:17:45.563 "reset": true, 00:17:45.563 "nvme_admin": false, 00:17:45.563 "nvme_io": false, 00:17:45.563 "nvme_io_md": false, 00:17:45.563 "write_zeroes": true, 00:17:45.563 "zcopy": true, 00:17:45.563 "get_zone_info": false, 00:17:45.563 "zone_management": false, 00:17:45.563 "zone_append": false, 00:17:45.563 "compare": false, 00:17:45.563 "compare_and_write": false, 00:17:45.563 "abort": true, 00:17:45.563 "seek_hole": false, 00:17:45.563 "seek_data": false, 00:17:45.563 "copy": true, 00:17:45.563 "nvme_iov_md": false 00:17:45.563 }, 00:17:45.563 "memory_domains": [ 00:17:45.563 { 00:17:45.563 "dma_device_id": "system", 00:17:45.563 "dma_device_type": 1 00:17:45.563 }, 00:17:45.563 { 00:17:45.563 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:45.563 "dma_device_type": 2 00:17:45.563 } 00:17:45.563 ], 00:17:45.563 "driver_specific": {} 00:17:45.563 } 00:17:45.563 ] 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.563 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.564 14:34:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.564 "name": "Existed_Raid", 00:17:45.564 "uuid": "d042a0c7-459e-44ca-b846-8fca6397167e", 00:17:45.564 "strip_size_kb": 64, 00:17:45.564 "state": "configuring", 00:17:45.564 "raid_level": "raid5f", 00:17:45.564 "superblock": true, 00:17:45.564 "num_base_bdevs": 4, 00:17:45.564 "num_base_bdevs_discovered": 1, 00:17:45.564 "num_base_bdevs_operational": 4, 00:17:45.564 "base_bdevs_list": [ 00:17:45.564 { 00:17:45.564 "name": "BaseBdev1", 00:17:45.564 "uuid": "41f856b6-39d1-454a-aaaf-18ee142c7554", 00:17:45.564 "is_configured": true, 00:17:45.564 "data_offset": 2048, 00:17:45.564 "data_size": 63488 00:17:45.564 }, 00:17:45.564 { 00:17:45.564 "name": "BaseBdev2", 00:17:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.564 "is_configured": false, 00:17:45.564 "data_offset": 0, 00:17:45.564 "data_size": 0 00:17:45.564 }, 00:17:45.564 { 00:17:45.564 "name": "BaseBdev3", 00:17:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.564 "is_configured": false, 00:17:45.564 "data_offset": 0, 00:17:45.564 "data_size": 0 00:17:45.564 }, 00:17:45.564 { 00:17:45.564 "name": "BaseBdev4", 00:17:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.564 "is_configured": false, 00:17:45.564 "data_offset": 0, 00:17:45.564 "data_size": 0 00:17:45.564 } 00:17:45.564 ] 00:17:45.564 }' 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.564 14:34:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.131 14:34:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.131 [2024-11-20 14:34:47.012093] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.131 [2024-11-20 14:34:47.012323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.131 [2024-11-20 14:34:47.020163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.131 [2024-11-20 14:34:47.022892] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.131 [2024-11-20 14:34:47.022975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.131 [2024-11-20 14:34:47.023100] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.131 [2024-11-20 14:34:47.023160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.131 [2024-11-20 14:34:47.023215] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:46.131 [2024-11-20 14:34:47.023358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.131 14:34:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.131 "name": "Existed_Raid", 00:17:46.131 "uuid": "58db5703-8743-4436-80bd-55a126ff2598", 00:17:46.131 "strip_size_kb": 64, 00:17:46.131 "state": "configuring", 00:17:46.131 "raid_level": "raid5f", 00:17:46.131 "superblock": true, 00:17:46.131 "num_base_bdevs": 4, 00:17:46.131 "num_base_bdevs_discovered": 1, 00:17:46.131 "num_base_bdevs_operational": 4, 00:17:46.131 "base_bdevs_list": [ 00:17:46.131 { 00:17:46.131 "name": "BaseBdev1", 00:17:46.131 "uuid": "41f856b6-39d1-454a-aaaf-18ee142c7554", 00:17:46.131 "is_configured": true, 00:17:46.131 "data_offset": 2048, 00:17:46.131 "data_size": 63488 00:17:46.131 }, 00:17:46.131 { 00:17:46.131 "name": "BaseBdev2", 00:17:46.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.131 "is_configured": false, 00:17:46.131 "data_offset": 0, 00:17:46.131 "data_size": 0 00:17:46.131 }, 00:17:46.131 { 00:17:46.131 "name": "BaseBdev3", 00:17:46.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.131 "is_configured": false, 00:17:46.131 "data_offset": 0, 00:17:46.131 "data_size": 0 00:17:46.131 }, 00:17:46.131 { 00:17:46.131 "name": "BaseBdev4", 00:17:46.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.131 "is_configured": false, 00:17:46.131 "data_offset": 0, 00:17:46.131 "data_size": 0 00:17:46.131 } 00:17:46.131 ] 00:17:46.131 }' 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.131 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.700 [2024-11-20 14:34:47.559249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.700 BaseBdev2 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.700 [ 00:17:46.700 { 00:17:46.700 "name": "BaseBdev2", 00:17:46.700 "aliases": [ 00:17:46.700 
"31450079-49a8-41b5-b729-69dc67f9e76b" 00:17:46.700 ], 00:17:46.700 "product_name": "Malloc disk", 00:17:46.700 "block_size": 512, 00:17:46.700 "num_blocks": 65536, 00:17:46.700 "uuid": "31450079-49a8-41b5-b729-69dc67f9e76b", 00:17:46.700 "assigned_rate_limits": { 00:17:46.700 "rw_ios_per_sec": 0, 00:17:46.700 "rw_mbytes_per_sec": 0, 00:17:46.700 "r_mbytes_per_sec": 0, 00:17:46.700 "w_mbytes_per_sec": 0 00:17:46.700 }, 00:17:46.700 "claimed": true, 00:17:46.700 "claim_type": "exclusive_write", 00:17:46.700 "zoned": false, 00:17:46.700 "supported_io_types": { 00:17:46.700 "read": true, 00:17:46.700 "write": true, 00:17:46.700 "unmap": true, 00:17:46.700 "flush": true, 00:17:46.700 "reset": true, 00:17:46.700 "nvme_admin": false, 00:17:46.700 "nvme_io": false, 00:17:46.700 "nvme_io_md": false, 00:17:46.700 "write_zeroes": true, 00:17:46.700 "zcopy": true, 00:17:46.700 "get_zone_info": false, 00:17:46.700 "zone_management": false, 00:17:46.700 "zone_append": false, 00:17:46.700 "compare": false, 00:17:46.700 "compare_and_write": false, 00:17:46.700 "abort": true, 00:17:46.700 "seek_hole": false, 00:17:46.700 "seek_data": false, 00:17:46.700 "copy": true, 00:17:46.700 "nvme_iov_md": false 00:17:46.700 }, 00:17:46.700 "memory_domains": [ 00:17:46.700 { 00:17:46.700 "dma_device_id": "system", 00:17:46.700 "dma_device_type": 1 00:17:46.700 }, 00:17:46.700 { 00:17:46.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.700 "dma_device_type": 2 00:17:46.700 } 00:17:46.700 ], 00:17:46.700 "driver_specific": {} 00:17:46.700 } 00:17:46.700 ] 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.700 "name": "Existed_Raid", 00:17:46.700 "uuid": 
"58db5703-8743-4436-80bd-55a126ff2598", 00:17:46.700 "strip_size_kb": 64, 00:17:46.700 "state": "configuring", 00:17:46.700 "raid_level": "raid5f", 00:17:46.700 "superblock": true, 00:17:46.700 "num_base_bdevs": 4, 00:17:46.700 "num_base_bdevs_discovered": 2, 00:17:46.700 "num_base_bdevs_operational": 4, 00:17:46.700 "base_bdevs_list": [ 00:17:46.700 { 00:17:46.700 "name": "BaseBdev1", 00:17:46.700 "uuid": "41f856b6-39d1-454a-aaaf-18ee142c7554", 00:17:46.700 "is_configured": true, 00:17:46.700 "data_offset": 2048, 00:17:46.700 "data_size": 63488 00:17:46.700 }, 00:17:46.700 { 00:17:46.700 "name": "BaseBdev2", 00:17:46.700 "uuid": "31450079-49a8-41b5-b729-69dc67f9e76b", 00:17:46.700 "is_configured": true, 00:17:46.700 "data_offset": 2048, 00:17:46.700 "data_size": 63488 00:17:46.700 }, 00:17:46.700 { 00:17:46.700 "name": "BaseBdev3", 00:17:46.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.700 "is_configured": false, 00:17:46.700 "data_offset": 0, 00:17:46.700 "data_size": 0 00:17:46.700 }, 00:17:46.700 { 00:17:46.700 "name": "BaseBdev4", 00:17:46.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.700 "is_configured": false, 00:17:46.700 "data_offset": 0, 00:17:46.700 "data_size": 0 00:17:46.700 } 00:17:46.700 ] 00:17:46.700 }' 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.700 14:34:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.267 [2024-11-20 14:34:48.153190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.267 BaseBdev3 
00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.267 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.267 [ 00:17:47.267 { 00:17:47.267 "name": "BaseBdev3", 00:17:47.267 "aliases": [ 00:17:47.267 "c63295c9-efb4-4165-949c-6b11e712f6f0" 00:17:47.267 ], 00:17:47.267 "product_name": "Malloc disk", 00:17:47.267 "block_size": 512, 00:17:47.267 "num_blocks": 65536, 00:17:47.267 "uuid": "c63295c9-efb4-4165-949c-6b11e712f6f0", 00:17:47.267 
"assigned_rate_limits": { 00:17:47.267 "rw_ios_per_sec": 0, 00:17:47.267 "rw_mbytes_per_sec": 0, 00:17:47.267 "r_mbytes_per_sec": 0, 00:17:47.267 "w_mbytes_per_sec": 0 00:17:47.267 }, 00:17:47.267 "claimed": true, 00:17:47.267 "claim_type": "exclusive_write", 00:17:47.267 "zoned": false, 00:17:47.267 "supported_io_types": { 00:17:47.267 "read": true, 00:17:47.267 "write": true, 00:17:47.267 "unmap": true, 00:17:47.267 "flush": true, 00:17:47.267 "reset": true, 00:17:47.267 "nvme_admin": false, 00:17:47.267 "nvme_io": false, 00:17:47.267 "nvme_io_md": false, 00:17:47.267 "write_zeroes": true, 00:17:47.267 "zcopy": true, 00:17:47.267 "get_zone_info": false, 00:17:47.267 "zone_management": false, 00:17:47.267 "zone_append": false, 00:17:47.268 "compare": false, 00:17:47.268 "compare_and_write": false, 00:17:47.268 "abort": true, 00:17:47.268 "seek_hole": false, 00:17:47.268 "seek_data": false, 00:17:47.268 "copy": true, 00:17:47.268 "nvme_iov_md": false 00:17:47.268 }, 00:17:47.268 "memory_domains": [ 00:17:47.268 { 00:17:47.268 "dma_device_id": "system", 00:17:47.268 "dma_device_type": 1 00:17:47.268 }, 00:17:47.268 { 00:17:47.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.268 "dma_device_type": 2 00:17:47.268 } 00:17:47.268 ], 00:17:47.268 "driver_specific": {} 00:17:47.268 } 00:17:47.268 ] 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.268 "name": "Existed_Raid", 00:17:47.268 "uuid": "58db5703-8743-4436-80bd-55a126ff2598", 00:17:47.268 "strip_size_kb": 64, 00:17:47.268 "state": "configuring", 00:17:47.268 "raid_level": "raid5f", 00:17:47.268 "superblock": true, 00:17:47.268 "num_base_bdevs": 4, 00:17:47.268 "num_base_bdevs_discovered": 3, 
00:17:47.268 "num_base_bdevs_operational": 4, 00:17:47.268 "base_bdevs_list": [ 00:17:47.268 { 00:17:47.268 "name": "BaseBdev1", 00:17:47.268 "uuid": "41f856b6-39d1-454a-aaaf-18ee142c7554", 00:17:47.268 "is_configured": true, 00:17:47.268 "data_offset": 2048, 00:17:47.268 "data_size": 63488 00:17:47.268 }, 00:17:47.268 { 00:17:47.268 "name": "BaseBdev2", 00:17:47.268 "uuid": "31450079-49a8-41b5-b729-69dc67f9e76b", 00:17:47.268 "is_configured": true, 00:17:47.268 "data_offset": 2048, 00:17:47.268 "data_size": 63488 00:17:47.268 }, 00:17:47.268 { 00:17:47.268 "name": "BaseBdev3", 00:17:47.268 "uuid": "c63295c9-efb4-4165-949c-6b11e712f6f0", 00:17:47.268 "is_configured": true, 00:17:47.268 "data_offset": 2048, 00:17:47.268 "data_size": 63488 00:17:47.268 }, 00:17:47.268 { 00:17:47.268 "name": "BaseBdev4", 00:17:47.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.268 "is_configured": false, 00:17:47.268 "data_offset": 0, 00:17:47.268 "data_size": 0 00:17:47.268 } 00:17:47.268 ] 00:17:47.268 }' 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.268 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 [2024-11-20 14:34:48.761813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.834 [2024-11-20 14:34:48.762286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:47.834 [2024-11-20 14:34:48.762310] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:47.834 BaseBdev4 
00:17:47.834 [2024-11-20 14:34:48.762736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 [2024-11-20 14:34:48.769856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:47.834 [2024-11-20 14:34:48.770142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:47.834 [2024-11-20 14:34:48.770742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:47.834 14:34:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.834 [ 00:17:47.834 { 00:17:47.834 "name": "BaseBdev4", 00:17:47.834 "aliases": [ 00:17:47.834 "edad415a-5f6c-48bb-93b1-fd0b15c7db95" 00:17:47.834 ], 00:17:47.834 "product_name": "Malloc disk", 00:17:47.834 "block_size": 512, 00:17:47.834 "num_blocks": 65536, 00:17:47.834 "uuid": "edad415a-5f6c-48bb-93b1-fd0b15c7db95", 00:17:47.834 "assigned_rate_limits": { 00:17:47.834 "rw_ios_per_sec": 0, 00:17:47.834 "rw_mbytes_per_sec": 0, 00:17:47.834 "r_mbytes_per_sec": 0, 00:17:47.834 "w_mbytes_per_sec": 0 00:17:47.834 }, 00:17:47.834 "claimed": true, 00:17:47.834 "claim_type": "exclusive_write", 00:17:47.834 "zoned": false, 00:17:47.834 "supported_io_types": { 00:17:47.834 "read": true, 00:17:47.834 "write": true, 00:17:47.834 "unmap": true, 00:17:47.834 "flush": true, 00:17:47.834 "reset": true, 00:17:47.834 "nvme_admin": false, 00:17:47.834 "nvme_io": false, 00:17:47.834 "nvme_io_md": false, 00:17:47.834 "write_zeroes": true, 00:17:47.834 "zcopy": true, 00:17:47.834 "get_zone_info": false, 00:17:47.834 "zone_management": false, 00:17:47.834 "zone_append": false, 00:17:47.834 "compare": false, 00:17:47.834 "compare_and_write": false, 00:17:47.834 "abort": true, 00:17:47.834 "seek_hole": false, 00:17:47.834 "seek_data": false, 00:17:47.834 "copy": true, 00:17:47.834 "nvme_iov_md": false 00:17:47.834 }, 00:17:47.834 "memory_domains": [ 00:17:47.834 { 00:17:47.834 "dma_device_id": "system", 00:17:47.834 "dma_device_type": 1 00:17:47.834 }, 00:17:47.834 { 00:17:47.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.834 "dma_device_type": 2 00:17:47.834 } 00:17:47.834 ], 00:17:47.834 "driver_specific": {} 00:17:47.834 } 00:17:47.834 ] 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.834 14:34:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.834 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.835 "name": "Existed_Raid", 00:17:47.835 "uuid": "58db5703-8743-4436-80bd-55a126ff2598", 00:17:47.835 "strip_size_kb": 64, 00:17:47.835 "state": "online", 00:17:47.835 "raid_level": "raid5f", 00:17:47.835 "superblock": true, 00:17:47.835 "num_base_bdevs": 4, 00:17:47.835 "num_base_bdevs_discovered": 4, 00:17:47.835 "num_base_bdevs_operational": 4, 00:17:47.835 "base_bdevs_list": [ 00:17:47.835 { 00:17:47.835 "name": "BaseBdev1", 00:17:47.835 "uuid": "41f856b6-39d1-454a-aaaf-18ee142c7554", 00:17:47.835 "is_configured": true, 00:17:47.835 "data_offset": 2048, 00:17:47.835 "data_size": 63488 00:17:47.835 }, 00:17:47.835 { 00:17:47.835 "name": "BaseBdev2", 00:17:47.835 "uuid": "31450079-49a8-41b5-b729-69dc67f9e76b", 00:17:47.835 "is_configured": true, 00:17:47.835 "data_offset": 2048, 00:17:47.835 "data_size": 63488 00:17:47.835 }, 00:17:47.835 { 00:17:47.835 "name": "BaseBdev3", 00:17:47.835 "uuid": "c63295c9-efb4-4165-949c-6b11e712f6f0", 00:17:47.835 "is_configured": true, 00:17:47.835 "data_offset": 2048, 00:17:47.835 "data_size": 63488 00:17:47.835 }, 00:17:47.835 { 00:17:47.835 "name": "BaseBdev4", 00:17:47.835 "uuid": "edad415a-5f6c-48bb-93b1-fd0b15c7db95", 00:17:47.835 "is_configured": true, 00:17:47.835 "data_offset": 2048, 00:17:47.835 "data_size": 63488 00:17:47.835 } 00:17:47.835 ] 00:17:47.835 }' 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.835 14:34:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.400 [2024-11-20 14:34:49.371519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.400 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.400 "name": "Existed_Raid", 00:17:48.400 "aliases": [ 00:17:48.400 "58db5703-8743-4436-80bd-55a126ff2598" 00:17:48.400 ], 00:17:48.400 "product_name": "Raid Volume", 00:17:48.400 "block_size": 512, 00:17:48.400 "num_blocks": 190464, 00:17:48.400 "uuid": "58db5703-8743-4436-80bd-55a126ff2598", 00:17:48.400 "assigned_rate_limits": { 00:17:48.400 "rw_ios_per_sec": 0, 00:17:48.400 "rw_mbytes_per_sec": 0, 00:17:48.400 "r_mbytes_per_sec": 0, 00:17:48.400 "w_mbytes_per_sec": 0 00:17:48.400 }, 00:17:48.400 "claimed": false, 00:17:48.400 "zoned": false, 00:17:48.400 "supported_io_types": { 00:17:48.400 "read": true, 00:17:48.400 "write": true, 00:17:48.400 "unmap": false, 00:17:48.400 "flush": false, 
00:17:48.400 "reset": true, 00:17:48.400 "nvme_admin": false, 00:17:48.400 "nvme_io": false, 00:17:48.400 "nvme_io_md": false, 00:17:48.400 "write_zeroes": true, 00:17:48.400 "zcopy": false, 00:17:48.400 "get_zone_info": false, 00:17:48.400 "zone_management": false, 00:17:48.401 "zone_append": false, 00:17:48.401 "compare": false, 00:17:48.401 "compare_and_write": false, 00:17:48.401 "abort": false, 00:17:48.401 "seek_hole": false, 00:17:48.401 "seek_data": false, 00:17:48.401 "copy": false, 00:17:48.401 "nvme_iov_md": false 00:17:48.401 }, 00:17:48.401 "driver_specific": { 00:17:48.401 "raid": { 00:17:48.401 "uuid": "58db5703-8743-4436-80bd-55a126ff2598", 00:17:48.401 "strip_size_kb": 64, 00:17:48.401 "state": "online", 00:17:48.401 "raid_level": "raid5f", 00:17:48.401 "superblock": true, 00:17:48.401 "num_base_bdevs": 4, 00:17:48.401 "num_base_bdevs_discovered": 4, 00:17:48.401 "num_base_bdevs_operational": 4, 00:17:48.401 "base_bdevs_list": [ 00:17:48.401 { 00:17:48.401 "name": "BaseBdev1", 00:17:48.401 "uuid": "41f856b6-39d1-454a-aaaf-18ee142c7554", 00:17:48.401 "is_configured": true, 00:17:48.401 "data_offset": 2048, 00:17:48.401 "data_size": 63488 00:17:48.401 }, 00:17:48.401 { 00:17:48.401 "name": "BaseBdev2", 00:17:48.401 "uuid": "31450079-49a8-41b5-b729-69dc67f9e76b", 00:17:48.401 "is_configured": true, 00:17:48.401 "data_offset": 2048, 00:17:48.401 "data_size": 63488 00:17:48.401 }, 00:17:48.401 { 00:17:48.401 "name": "BaseBdev3", 00:17:48.401 "uuid": "c63295c9-efb4-4165-949c-6b11e712f6f0", 00:17:48.401 "is_configured": true, 00:17:48.401 "data_offset": 2048, 00:17:48.401 "data_size": 63488 00:17:48.401 }, 00:17:48.401 { 00:17:48.401 "name": "BaseBdev4", 00:17:48.401 "uuid": "edad415a-5f6c-48bb-93b1-fd0b15c7db95", 00:17:48.401 "is_configured": true, 00:17:48.401 "data_offset": 2048, 00:17:48.401 "data_size": 63488 00:17:48.401 } 00:17:48.401 ] 00:17:48.401 } 00:17:48.401 } 00:17:48.401 }' 00:17:48.401 14:34:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:48.659 BaseBdev2 00:17:48.659 BaseBdev3 00:17:48.659 BaseBdev4' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.659 14:34:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.659 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:48.918 14:34:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.918 [2024-11-20 14:34:49.767380] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.918 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.919 "name": "Existed_Raid", 00:17:48.919 "uuid": "58db5703-8743-4436-80bd-55a126ff2598", 00:17:48.919 "strip_size_kb": 64, 00:17:48.919 "state": "online", 00:17:48.919 "raid_level": "raid5f", 00:17:48.919 "superblock": true, 00:17:48.919 "num_base_bdevs": 4, 00:17:48.919 "num_base_bdevs_discovered": 3, 00:17:48.919 "num_base_bdevs_operational": 3, 00:17:48.919 "base_bdevs_list": [ 00:17:48.919 { 00:17:48.919 "name": 
null, 00:17:48.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.919 "is_configured": false, 00:17:48.919 "data_offset": 0, 00:17:48.919 "data_size": 63488 00:17:48.919 }, 00:17:48.919 { 00:17:48.919 "name": "BaseBdev2", 00:17:48.919 "uuid": "31450079-49a8-41b5-b729-69dc67f9e76b", 00:17:48.919 "is_configured": true, 00:17:48.919 "data_offset": 2048, 00:17:48.919 "data_size": 63488 00:17:48.919 }, 00:17:48.919 { 00:17:48.919 "name": "BaseBdev3", 00:17:48.919 "uuid": "c63295c9-efb4-4165-949c-6b11e712f6f0", 00:17:48.919 "is_configured": true, 00:17:48.919 "data_offset": 2048, 00:17:48.919 "data_size": 63488 00:17:48.919 }, 00:17:48.919 { 00:17:48.919 "name": "BaseBdev4", 00:17:48.919 "uuid": "edad415a-5f6c-48bb-93b1-fd0b15c7db95", 00:17:48.919 "is_configured": true, 00:17:48.919 "data_offset": 2048, 00:17:48.919 "data_size": 63488 00:17:48.919 } 00:17:48.919 ] 00:17:48.919 }' 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.919 14:34:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.485 [2024-11-20 14:34:50.421677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.485 [2024-11-20 14:34:50.421971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.485 [2024-11-20 14:34:50.512717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.485 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.743 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:49.743 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:49.743 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:49.743 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.743 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.743 [2024-11-20 14:34:50.576756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.743 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.744 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.744 [2024-11-20 
14:34:50.727890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:49.744 [2024-11-20 14:34:50.727962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.002 14:34:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.002 BaseBdev2 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.002 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.002 [ 00:17:50.002 { 00:17:50.002 "name": "BaseBdev2", 00:17:50.002 "aliases": [ 00:17:50.002 "5a37c7a5-29af-43a5-9445-fea46a67223e" 00:17:50.002 ], 00:17:50.002 "product_name": "Malloc disk", 00:17:50.002 "block_size": 512, 00:17:50.002 
"num_blocks": 65536, 00:17:50.002 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:50.002 "assigned_rate_limits": { 00:17:50.002 "rw_ios_per_sec": 0, 00:17:50.002 "rw_mbytes_per_sec": 0, 00:17:50.002 "r_mbytes_per_sec": 0, 00:17:50.002 "w_mbytes_per_sec": 0 00:17:50.002 }, 00:17:50.002 "claimed": false, 00:17:50.002 "zoned": false, 00:17:50.002 "supported_io_types": { 00:17:50.002 "read": true, 00:17:50.002 "write": true, 00:17:50.002 "unmap": true, 00:17:50.002 "flush": true, 00:17:50.002 "reset": true, 00:17:50.002 "nvme_admin": false, 00:17:50.002 "nvme_io": false, 00:17:50.002 "nvme_io_md": false, 00:17:50.002 "write_zeroes": true, 00:17:50.002 "zcopy": true, 00:17:50.002 "get_zone_info": false, 00:17:50.002 "zone_management": false, 00:17:50.002 "zone_append": false, 00:17:50.002 "compare": false, 00:17:50.002 "compare_and_write": false, 00:17:50.002 "abort": true, 00:17:50.002 "seek_hole": false, 00:17:50.002 "seek_data": false, 00:17:50.002 "copy": true, 00:17:50.002 "nvme_iov_md": false 00:17:50.002 }, 00:17:50.002 "memory_domains": [ 00:17:50.002 { 00:17:50.002 "dma_device_id": "system", 00:17:50.002 "dma_device_type": 1 00:17:50.002 }, 00:17:50.002 { 00:17:50.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.002 "dma_device_type": 2 00:17:50.002 } 00:17:50.002 ], 00:17:50.002 "driver_specific": {} 00:17:50.003 } 00:17:50.003 ] 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:50.003 14:34:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.003 BaseBdev3 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.003 14:34:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.003 [ 00:17:50.003 { 00:17:50.003 "name": "BaseBdev3", 00:17:50.003 "aliases": [ 00:17:50.003 
"b2295e53-f385-45e4-98ff-ccaf4661d8e9" 00:17:50.003 ], 00:17:50.003 "product_name": "Malloc disk", 00:17:50.003 "block_size": 512, 00:17:50.003 "num_blocks": 65536, 00:17:50.003 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:50.003 "assigned_rate_limits": { 00:17:50.003 "rw_ios_per_sec": 0, 00:17:50.003 "rw_mbytes_per_sec": 0, 00:17:50.003 "r_mbytes_per_sec": 0, 00:17:50.003 "w_mbytes_per_sec": 0 00:17:50.003 }, 00:17:50.003 "claimed": false, 00:17:50.003 "zoned": false, 00:17:50.003 "supported_io_types": { 00:17:50.003 "read": true, 00:17:50.003 "write": true, 00:17:50.003 "unmap": true, 00:17:50.003 "flush": true, 00:17:50.003 "reset": true, 00:17:50.003 "nvme_admin": false, 00:17:50.003 "nvme_io": false, 00:17:50.003 "nvme_io_md": false, 00:17:50.003 "write_zeroes": true, 00:17:50.003 "zcopy": true, 00:17:50.003 "get_zone_info": false, 00:17:50.003 "zone_management": false, 00:17:50.003 "zone_append": false, 00:17:50.003 "compare": false, 00:17:50.003 "compare_and_write": false, 00:17:50.003 "abort": true, 00:17:50.003 "seek_hole": false, 00:17:50.003 "seek_data": false, 00:17:50.003 "copy": true, 00:17:50.003 "nvme_iov_md": false 00:17:50.003 }, 00:17:50.003 "memory_domains": [ 00:17:50.003 { 00:17:50.003 "dma_device_id": "system", 00:17:50.003 "dma_device_type": 1 00:17:50.003 }, 00:17:50.003 { 00:17:50.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.003 "dma_device_type": 2 00:17:50.003 } 00:17:50.003 ], 00:17:50.003 "driver_specific": {} 00:17:50.003 } 00:17:50.003 ] 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:50.003 14:34:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.003 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 BaseBdev4 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:50.262 [ 00:17:50.262 { 00:17:50.262 "name": "BaseBdev4", 00:17:50.262 "aliases": [ 00:17:50.262 "a811317b-35c5-495a-a361-70dadc14d510" 00:17:50.262 ], 00:17:50.262 "product_name": "Malloc disk", 00:17:50.262 "block_size": 512, 00:17:50.262 "num_blocks": 65536, 00:17:50.262 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:50.262 "assigned_rate_limits": { 00:17:50.262 "rw_ios_per_sec": 0, 00:17:50.262 "rw_mbytes_per_sec": 0, 00:17:50.262 "r_mbytes_per_sec": 0, 00:17:50.262 "w_mbytes_per_sec": 0 00:17:50.262 }, 00:17:50.262 "claimed": false, 00:17:50.262 "zoned": false, 00:17:50.262 "supported_io_types": { 00:17:50.262 "read": true, 00:17:50.262 "write": true, 00:17:50.262 "unmap": true, 00:17:50.262 "flush": true, 00:17:50.262 "reset": true, 00:17:50.262 "nvme_admin": false, 00:17:50.262 "nvme_io": false, 00:17:50.262 "nvme_io_md": false, 00:17:50.262 "write_zeroes": true, 00:17:50.262 "zcopy": true, 00:17:50.262 "get_zone_info": false, 00:17:50.262 "zone_management": false, 00:17:50.262 "zone_append": false, 00:17:50.262 "compare": false, 00:17:50.262 "compare_and_write": false, 00:17:50.262 "abort": true, 00:17:50.262 "seek_hole": false, 00:17:50.262 "seek_data": false, 00:17:50.262 "copy": true, 00:17:50.262 "nvme_iov_md": false 00:17:50.262 }, 00:17:50.262 "memory_domains": [ 00:17:50.262 { 00:17:50.262 "dma_device_id": "system", 00:17:50.262 "dma_device_type": 1 00:17:50.262 }, 00:17:50.262 { 00:17:50.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.262 "dma_device_type": 2 00:17:50.262 } 00:17:50.262 ], 00:17:50.262 "driver_specific": {} 00:17:50.262 } 00:17:50.262 ] 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:50.262 14:34:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 [2024-11-20 14:34:51.117769] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.262 [2024-11-20 14:34:51.117829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.262 [2024-11-20 14:34:51.117863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.262 [2024-11-20 14:34:51.120432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.262 [2024-11-20 14:34:51.120511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.263 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.263 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.263 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.263 "name": "Existed_Raid", 00:17:50.263 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:50.263 "strip_size_kb": 64, 00:17:50.263 "state": "configuring", 00:17:50.263 "raid_level": "raid5f", 00:17:50.263 "superblock": true, 00:17:50.263 "num_base_bdevs": 4, 00:17:50.263 "num_base_bdevs_discovered": 3, 00:17:50.263 "num_base_bdevs_operational": 4, 00:17:50.263 "base_bdevs_list": [ 00:17:50.263 { 00:17:50.263 "name": "BaseBdev1", 00:17:50.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.263 "is_configured": false, 00:17:50.263 "data_offset": 0, 00:17:50.263 "data_size": 0 00:17:50.263 }, 00:17:50.263 { 00:17:50.263 "name": "BaseBdev2", 00:17:50.263 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:50.263 "is_configured": true, 00:17:50.263 "data_offset": 2048, 00:17:50.263 
"data_size": 63488 00:17:50.263 }, 00:17:50.263 { 00:17:50.263 "name": "BaseBdev3", 00:17:50.263 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:50.263 "is_configured": true, 00:17:50.263 "data_offset": 2048, 00:17:50.263 "data_size": 63488 00:17:50.263 }, 00:17:50.263 { 00:17:50.263 "name": "BaseBdev4", 00:17:50.263 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:50.263 "is_configured": true, 00:17:50.263 "data_offset": 2048, 00:17:50.263 "data_size": 63488 00:17:50.263 } 00:17:50.263 ] 00:17:50.263 }' 00:17:50.263 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.263 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.828 [2024-11-20 14:34:51.641995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.828 14:34:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.828 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.829 "name": "Existed_Raid", 00:17:50.829 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:50.829 "strip_size_kb": 64, 00:17:50.829 "state": "configuring", 00:17:50.829 "raid_level": "raid5f", 00:17:50.829 "superblock": true, 00:17:50.829 "num_base_bdevs": 4, 00:17:50.829 "num_base_bdevs_discovered": 2, 00:17:50.829 "num_base_bdevs_operational": 4, 00:17:50.829 "base_bdevs_list": [ 00:17:50.829 { 00:17:50.829 "name": "BaseBdev1", 00:17:50.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.829 "is_configured": false, 00:17:50.829 "data_offset": 0, 00:17:50.829 "data_size": 0 00:17:50.829 }, 00:17:50.829 { 00:17:50.829 "name": null, 00:17:50.829 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:50.829 
"is_configured": false, 00:17:50.829 "data_offset": 0, 00:17:50.829 "data_size": 63488 00:17:50.829 }, 00:17:50.829 { 00:17:50.829 "name": "BaseBdev3", 00:17:50.829 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:50.829 "is_configured": true, 00:17:50.829 "data_offset": 2048, 00:17:50.829 "data_size": 63488 00:17:50.829 }, 00:17:50.829 { 00:17:50.829 "name": "BaseBdev4", 00:17:50.829 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:50.829 "is_configured": true, 00:17:50.829 "data_offset": 2048, 00:17:50.829 "data_size": 63488 00:17:50.829 } 00:17:50.829 ] 00:17:50.829 }' 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.829 14:34:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 [2024-11-20 14:34:52.257395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:51.395 BaseBdev1 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 [ 00:17:51.395 { 00:17:51.395 "name": "BaseBdev1", 00:17:51.395 "aliases": [ 00:17:51.395 "abd6957c-f50b-4ef2-b037-b16432b0adb9" 00:17:51.395 ], 00:17:51.395 "product_name": "Malloc disk", 00:17:51.395 "block_size": 512, 00:17:51.395 "num_blocks": 65536, 00:17:51.395 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 
00:17:51.395 "assigned_rate_limits": { 00:17:51.395 "rw_ios_per_sec": 0, 00:17:51.395 "rw_mbytes_per_sec": 0, 00:17:51.395 "r_mbytes_per_sec": 0, 00:17:51.395 "w_mbytes_per_sec": 0 00:17:51.395 }, 00:17:51.395 "claimed": true, 00:17:51.395 "claim_type": "exclusive_write", 00:17:51.395 "zoned": false, 00:17:51.395 "supported_io_types": { 00:17:51.395 "read": true, 00:17:51.395 "write": true, 00:17:51.395 "unmap": true, 00:17:51.395 "flush": true, 00:17:51.395 "reset": true, 00:17:51.395 "nvme_admin": false, 00:17:51.395 "nvme_io": false, 00:17:51.395 "nvme_io_md": false, 00:17:51.395 "write_zeroes": true, 00:17:51.395 "zcopy": true, 00:17:51.395 "get_zone_info": false, 00:17:51.395 "zone_management": false, 00:17:51.395 "zone_append": false, 00:17:51.395 "compare": false, 00:17:51.395 "compare_and_write": false, 00:17:51.395 "abort": true, 00:17:51.395 "seek_hole": false, 00:17:51.395 "seek_data": false, 00:17:51.395 "copy": true, 00:17:51.395 "nvme_iov_md": false 00:17:51.395 }, 00:17:51.395 "memory_domains": [ 00:17:51.395 { 00:17:51.395 "dma_device_id": "system", 00:17:51.395 "dma_device_type": 1 00:17:51.395 }, 00:17:51.395 { 00:17:51.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.395 "dma_device_type": 2 00:17:51.395 } 00:17:51.395 ], 00:17:51.395 "driver_specific": {} 00:17:51.395 } 00:17:51.395 ] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.395 14:34:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.395 "name": "Existed_Raid", 00:17:51.395 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:51.395 "strip_size_kb": 64, 00:17:51.395 "state": "configuring", 00:17:51.395 "raid_level": "raid5f", 00:17:51.395 "superblock": true, 00:17:51.395 "num_base_bdevs": 4, 00:17:51.395 "num_base_bdevs_discovered": 3, 00:17:51.395 "num_base_bdevs_operational": 4, 00:17:51.395 "base_bdevs_list": [ 00:17:51.395 { 00:17:51.395 "name": "BaseBdev1", 00:17:51.395 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 
00:17:51.395 "is_configured": true, 00:17:51.395 "data_offset": 2048, 00:17:51.395 "data_size": 63488 00:17:51.395 }, 00:17:51.395 { 00:17:51.395 "name": null, 00:17:51.395 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:51.395 "is_configured": false, 00:17:51.395 "data_offset": 0, 00:17:51.395 "data_size": 63488 00:17:51.395 }, 00:17:51.395 { 00:17:51.395 "name": "BaseBdev3", 00:17:51.395 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:51.395 "is_configured": true, 00:17:51.395 "data_offset": 2048, 00:17:51.395 "data_size": 63488 00:17:51.395 }, 00:17:51.395 { 00:17:51.395 "name": "BaseBdev4", 00:17:51.395 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:51.395 "is_configured": true, 00:17:51.395 "data_offset": 2048, 00:17:51.395 "data_size": 63488 00:17:51.395 } 00:17:51.395 ] 00:17:51.395 }' 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.395 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.960 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.960 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.960 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.961 [2024-11-20 14:34:52.853658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.961 "name": "Existed_Raid", 00:17:51.961 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:51.961 "strip_size_kb": 64, 00:17:51.961 "state": "configuring", 00:17:51.961 "raid_level": "raid5f", 00:17:51.961 "superblock": true, 00:17:51.961 "num_base_bdevs": 4, 00:17:51.961 "num_base_bdevs_discovered": 2, 00:17:51.961 "num_base_bdevs_operational": 4, 00:17:51.961 "base_bdevs_list": [ 00:17:51.961 { 00:17:51.961 "name": "BaseBdev1", 00:17:51.961 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 00:17:51.961 "is_configured": true, 00:17:51.961 "data_offset": 2048, 00:17:51.961 "data_size": 63488 00:17:51.961 }, 00:17:51.961 { 00:17:51.961 "name": null, 00:17:51.961 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:51.961 "is_configured": false, 00:17:51.961 "data_offset": 0, 00:17:51.961 "data_size": 63488 00:17:51.961 }, 00:17:51.961 { 00:17:51.961 "name": null, 00:17:51.961 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:51.961 "is_configured": false, 00:17:51.961 "data_offset": 0, 00:17:51.961 "data_size": 63488 00:17:51.961 }, 00:17:51.961 { 00:17:51.961 "name": "BaseBdev4", 00:17:51.961 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:51.961 "is_configured": true, 00:17:51.961 "data_offset": 2048, 00:17:51.961 "data_size": 63488 00:17:51.961 } 00:17:51.961 ] 00:17:51.961 }' 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.961 14:34:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.527 [2024-11-20 14:34:53.433793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.527 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.527 "name": "Existed_Raid", 00:17:52.527 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:52.527 "strip_size_kb": 64, 00:17:52.528 "state": "configuring", 00:17:52.528 "raid_level": "raid5f", 00:17:52.528 "superblock": true, 00:17:52.528 "num_base_bdevs": 4, 00:17:52.528 "num_base_bdevs_discovered": 3, 00:17:52.528 "num_base_bdevs_operational": 4, 00:17:52.528 "base_bdevs_list": [ 00:17:52.528 { 00:17:52.528 "name": "BaseBdev1", 00:17:52.528 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 00:17:52.528 "is_configured": true, 00:17:52.528 "data_offset": 2048, 00:17:52.528 "data_size": 63488 00:17:52.528 }, 00:17:52.528 { 00:17:52.528 "name": null, 00:17:52.528 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:52.528 "is_configured": false, 00:17:52.528 "data_offset": 0, 00:17:52.528 "data_size": 63488 00:17:52.528 }, 00:17:52.528 { 00:17:52.528 "name": "BaseBdev3", 00:17:52.528 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 
00:17:52.528 "is_configured": true, 00:17:52.528 "data_offset": 2048, 00:17:52.528 "data_size": 63488 00:17:52.528 }, 00:17:52.528 { 00:17:52.528 "name": "BaseBdev4", 00:17:52.528 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:52.528 "is_configured": true, 00:17:52.528 "data_offset": 2048, 00:17:52.528 "data_size": 63488 00:17:52.528 } 00:17:52.528 ] 00:17:52.528 }' 00:17:52.528 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.528 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.096 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:53.096 14:34:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.096 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.096 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.096 14:34:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.096 [2024-11-20 14:34:54.018037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.096 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.411 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.411 "name": "Existed_Raid", 00:17:53.411 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:53.411 "strip_size_kb": 64, 00:17:53.411 "state": "configuring", 00:17:53.411 "raid_level": "raid5f", 
00:17:53.411 "superblock": true, 00:17:53.411 "num_base_bdevs": 4, 00:17:53.411 "num_base_bdevs_discovered": 2, 00:17:53.411 "num_base_bdevs_operational": 4, 00:17:53.411 "base_bdevs_list": [ 00:17:53.411 { 00:17:53.411 "name": null, 00:17:53.411 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 00:17:53.411 "is_configured": false, 00:17:53.411 "data_offset": 0, 00:17:53.411 "data_size": 63488 00:17:53.411 }, 00:17:53.411 { 00:17:53.411 "name": null, 00:17:53.411 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:53.411 "is_configured": false, 00:17:53.411 "data_offset": 0, 00:17:53.411 "data_size": 63488 00:17:53.411 }, 00:17:53.411 { 00:17:53.411 "name": "BaseBdev3", 00:17:53.411 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:53.411 "is_configured": true, 00:17:53.411 "data_offset": 2048, 00:17:53.411 "data_size": 63488 00:17:53.411 }, 00:17:53.411 { 00:17:53.411 "name": "BaseBdev4", 00:17:53.411 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:53.411 "is_configured": true, 00:17:53.411 "data_offset": 2048, 00:17:53.411 "data_size": 63488 00:17:53.411 } 00:17:53.411 ] 00:17:53.411 }' 00:17:53.411 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.411 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.669 [2024-11-20 14:34:54.666979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.669 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.670 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.927 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.927 "name": "Existed_Raid", 00:17:53.927 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:53.927 "strip_size_kb": 64, 00:17:53.927 "state": "configuring", 00:17:53.927 "raid_level": "raid5f", 00:17:53.927 "superblock": true, 00:17:53.927 "num_base_bdevs": 4, 00:17:53.927 "num_base_bdevs_discovered": 3, 00:17:53.927 "num_base_bdevs_operational": 4, 00:17:53.927 "base_bdevs_list": [ 00:17:53.927 { 00:17:53.927 "name": null, 00:17:53.927 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 00:17:53.927 "is_configured": false, 00:17:53.927 "data_offset": 0, 00:17:53.927 "data_size": 63488 00:17:53.927 }, 00:17:53.927 { 00:17:53.927 "name": "BaseBdev2", 00:17:53.927 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:53.927 "is_configured": true, 00:17:53.927 "data_offset": 2048, 00:17:53.927 "data_size": 63488 00:17:53.927 }, 00:17:53.927 { 00:17:53.927 "name": "BaseBdev3", 00:17:53.927 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:53.927 "is_configured": true, 00:17:53.927 "data_offset": 2048, 00:17:53.927 "data_size": 63488 00:17:53.927 }, 00:17:53.927 { 00:17:53.927 "name": "BaseBdev4", 00:17:53.927 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:53.927 "is_configured": true, 00:17:53.927 "data_offset": 2048, 00:17:53.927 "data_size": 63488 00:17:53.927 } 00:17:53.927 ] 00:17:53.927 }' 00:17:53.928 14:34:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:17:53.928 14:34:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.186 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u abd6957c-f50b-4ef2-b037-b16432b0adb9 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.444 [2024-11-20 14:34:55.327493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:54.444 [2024-11-20 14:34:55.328058] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:54.444 [2024-11-20 14:34:55.328085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:54.444 NewBaseBdev 00:17:54.444 [2024-11-20 14:34:55.328438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.444 [2024-11-20 14:34:55.334953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:54.444 [2024-11-20 14:34:55.334985] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:54.444 [2024-11-20 14:34:55.335281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.444 [ 00:17:54.444 { 00:17:54.444 "name": "NewBaseBdev", 00:17:54.444 "aliases": [ 00:17:54.444 "abd6957c-f50b-4ef2-b037-b16432b0adb9" 00:17:54.444 ], 00:17:54.444 "product_name": "Malloc disk", 00:17:54.444 "block_size": 512, 00:17:54.444 "num_blocks": 65536, 00:17:54.444 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 00:17:54.444 "assigned_rate_limits": { 00:17:54.444 "rw_ios_per_sec": 0, 00:17:54.444 "rw_mbytes_per_sec": 0, 00:17:54.444 "r_mbytes_per_sec": 0, 00:17:54.444 "w_mbytes_per_sec": 0 00:17:54.444 }, 00:17:54.444 "claimed": true, 00:17:54.444 "claim_type": "exclusive_write", 00:17:54.444 "zoned": false, 00:17:54.444 "supported_io_types": { 00:17:54.444 "read": true, 00:17:54.444 "write": true, 00:17:54.444 "unmap": true, 00:17:54.444 "flush": true, 00:17:54.444 "reset": true, 00:17:54.444 "nvme_admin": false, 00:17:54.444 "nvme_io": false, 00:17:54.444 "nvme_io_md": false, 00:17:54.444 "write_zeroes": true, 00:17:54.444 "zcopy": true, 00:17:54.444 "get_zone_info": false, 00:17:54.444 "zone_management": false, 00:17:54.444 "zone_append": false, 00:17:54.444 "compare": false, 00:17:54.444 "compare_and_write": false, 00:17:54.444 "abort": true, 00:17:54.444 "seek_hole": false, 00:17:54.444 "seek_data": false, 00:17:54.444 "copy": true, 00:17:54.444 "nvme_iov_md": false 00:17:54.444 }, 00:17:54.444 "memory_domains": [ 00:17:54.444 { 00:17:54.444 "dma_device_id": "system", 00:17:54.444 "dma_device_type": 1 00:17:54.444 }, 00:17:54.444 { 00:17:54.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.444 "dma_device_type": 2 00:17:54.444 } 
00:17:54.444 ], 00:17:54.444 "driver_specific": {} 00:17:54.444 } 00:17:54.444 ] 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.444 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.445 
14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.445 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.445 "name": "Existed_Raid", 00:17:54.445 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:54.445 "strip_size_kb": 64, 00:17:54.445 "state": "online", 00:17:54.445 "raid_level": "raid5f", 00:17:54.445 "superblock": true, 00:17:54.445 "num_base_bdevs": 4, 00:17:54.445 "num_base_bdevs_discovered": 4, 00:17:54.445 "num_base_bdevs_operational": 4, 00:17:54.445 "base_bdevs_list": [ 00:17:54.445 { 00:17:54.445 "name": "NewBaseBdev", 00:17:54.445 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 00:17:54.445 "is_configured": true, 00:17:54.445 "data_offset": 2048, 00:17:54.445 "data_size": 63488 00:17:54.445 }, 00:17:54.445 { 00:17:54.445 "name": "BaseBdev2", 00:17:54.445 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:54.445 "is_configured": true, 00:17:54.445 "data_offset": 2048, 00:17:54.445 "data_size": 63488 00:17:54.445 }, 00:17:54.445 { 00:17:54.445 "name": "BaseBdev3", 00:17:54.445 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:54.445 "is_configured": true, 00:17:54.445 "data_offset": 2048, 00:17:54.445 "data_size": 63488 00:17:54.445 }, 00:17:54.445 { 00:17:54.445 "name": "BaseBdev4", 00:17:54.445 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:54.445 "is_configured": true, 00:17:54.445 "data_offset": 2048, 00:17:54.445 "data_size": 63488 00:17:54.445 } 00:17:54.445 ] 00:17:54.445 }' 00:17:54.445 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.445 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.012 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.013 [2024-11-20 14:34:55.884009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.013 "name": "Existed_Raid", 00:17:55.013 "aliases": [ 00:17:55.013 "886de11e-afc8-4e75-b3cd-81fa91d9e53b" 00:17:55.013 ], 00:17:55.013 "product_name": "Raid Volume", 00:17:55.013 "block_size": 512, 00:17:55.013 "num_blocks": 190464, 00:17:55.013 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:55.013 "assigned_rate_limits": { 00:17:55.013 "rw_ios_per_sec": 0, 00:17:55.013 "rw_mbytes_per_sec": 0, 00:17:55.013 "r_mbytes_per_sec": 0, 00:17:55.013 "w_mbytes_per_sec": 0 00:17:55.013 }, 00:17:55.013 "claimed": false, 00:17:55.013 "zoned": false, 00:17:55.013 "supported_io_types": { 00:17:55.013 "read": true, 00:17:55.013 "write": true, 00:17:55.013 "unmap": false, 00:17:55.013 "flush": false, 
00:17:55.013 "reset": true, 00:17:55.013 "nvme_admin": false, 00:17:55.013 "nvme_io": false, 00:17:55.013 "nvme_io_md": false, 00:17:55.013 "write_zeroes": true, 00:17:55.013 "zcopy": false, 00:17:55.013 "get_zone_info": false, 00:17:55.013 "zone_management": false, 00:17:55.013 "zone_append": false, 00:17:55.013 "compare": false, 00:17:55.013 "compare_and_write": false, 00:17:55.013 "abort": false, 00:17:55.013 "seek_hole": false, 00:17:55.013 "seek_data": false, 00:17:55.013 "copy": false, 00:17:55.013 "nvme_iov_md": false 00:17:55.013 }, 00:17:55.013 "driver_specific": { 00:17:55.013 "raid": { 00:17:55.013 "uuid": "886de11e-afc8-4e75-b3cd-81fa91d9e53b", 00:17:55.013 "strip_size_kb": 64, 00:17:55.013 "state": "online", 00:17:55.013 "raid_level": "raid5f", 00:17:55.013 "superblock": true, 00:17:55.013 "num_base_bdevs": 4, 00:17:55.013 "num_base_bdevs_discovered": 4, 00:17:55.013 "num_base_bdevs_operational": 4, 00:17:55.013 "base_bdevs_list": [ 00:17:55.013 { 00:17:55.013 "name": "NewBaseBdev", 00:17:55.013 "uuid": "abd6957c-f50b-4ef2-b037-b16432b0adb9", 00:17:55.013 "is_configured": true, 00:17:55.013 "data_offset": 2048, 00:17:55.013 "data_size": 63488 00:17:55.013 }, 00:17:55.013 { 00:17:55.013 "name": "BaseBdev2", 00:17:55.013 "uuid": "5a37c7a5-29af-43a5-9445-fea46a67223e", 00:17:55.013 "is_configured": true, 00:17:55.013 "data_offset": 2048, 00:17:55.013 "data_size": 63488 00:17:55.013 }, 00:17:55.013 { 00:17:55.013 "name": "BaseBdev3", 00:17:55.013 "uuid": "b2295e53-f385-45e4-98ff-ccaf4661d8e9", 00:17:55.013 "is_configured": true, 00:17:55.013 "data_offset": 2048, 00:17:55.013 "data_size": 63488 00:17:55.013 }, 00:17:55.013 { 00:17:55.013 "name": "BaseBdev4", 00:17:55.013 "uuid": "a811317b-35c5-495a-a361-70dadc14d510", 00:17:55.013 "is_configured": true, 00:17:55.013 "data_offset": 2048, 00:17:55.013 "data_size": 63488 00:17:55.013 } 00:17:55.013 ] 00:17:55.013 } 00:17:55.013 } 00:17:55.013 }' 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:55.013 BaseBdev2 00:17:55.013 BaseBdev3 00:17:55.013 BaseBdev4' 00:17:55.013 14:34:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.013 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:55.013 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.013 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:55.013 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.013 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.013 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.013 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:55.273 14:34:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.273 [2024-11-20 14:34:56.243759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.273 [2024-11-20 14:34:56.243819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.273 [2024-11-20 14:34:56.243915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.273 [2024-11-20 14:34:56.244419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.273 [2024-11-20 14:34:56.244438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83928 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83928 ']' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83928 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83928 00:17:55.273 killing process with pid 83928 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83928' 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83928 00:17:55.273 [2024-11-20 14:34:56.283479] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.273 14:34:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83928 00:17:55.840 [2024-11-20 14:34:56.657906] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.774 ************************************ 00:17:56.774 END TEST raid5f_state_function_test_sb 00:17:56.774 ************************************ 00:17:56.774 14:34:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:56.774 00:17:56.774 real 0m13.106s 00:17:56.774 user 0m21.556s 00:17:56.774 sys 0m1.909s 00:17:56.774 14:34:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:56.774 14:34:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.033 14:34:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:57.033 14:34:57 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:57.033 14:34:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.033 14:34:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.033 ************************************ 00:17:57.033 START TEST raid5f_superblock_test 00:17:57.033 ************************************ 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:57.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84610 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84610 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84610 ']' 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.033 14:34:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.033 [2024-11-20 14:34:57.942014] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:17:57.033 [2024-11-20 14:34:57.942193] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84610 ] 00:17:57.291 [2024-11-20 14:34:58.121075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.291 [2024-11-20 14:34:58.257669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.552 [2024-11-20 14:34:58.474409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.552 [2024-11-20 14:34:58.474464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 malloc1 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 [2024-11-20 14:34:58.985585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.119 [2024-11-20 14:34:58.985672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.119 [2024-11-20 14:34:58.985709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:58.119 [2024-11-20 14:34:58.985725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.119 [2024-11-20 14:34:58.988538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.119 [2024-11-20 14:34:58.988586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.119 pt1 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 malloc2 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 [2024-11-20 14:34:59.044597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.119 [2024-11-20 14:34:59.044682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.119 [2024-11-20 14:34:59.044722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:58.119 [2024-11-20 14:34:59.044739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.119 [2024-11-20 14:34:59.047807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.119 [2024-11-20 14:34:59.047882] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.119 pt2 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 malloc3 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 [2024-11-20 14:34:59.114299] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:58.119 [2024-11-20 14:34:59.114368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.119 [2024-11-20 14:34:59.114403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:58.119 [2024-11-20 14:34:59.114420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.119 [2024-11-20 14:34:59.117459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.119 [2024-11-20 14:34:59.117512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:58.119 pt3 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 malloc4 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.119 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.378 [2024-11-20 14:34:59.174880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:58.378 [2024-11-20 14:34:59.174957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.378 [2024-11-20 14:34:59.174991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:58.378 [2024-11-20 14:34:59.175008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.378 [2024-11-20 14:34:59.177954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.378 [2024-11-20 14:34:59.178233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:58.378 pt4 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.378 [2024-11-20 14:34:59.186960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.378 [2024-11-20 14:34:59.189968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.378 [2024-11-20 14:34:59.190289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:58.378 [2024-11-20 14:34:59.190415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:58.378 [2024-11-20 14:34:59.190823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:58.378 [2024-11-20 14:34:59.190962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:58.378 [2024-11-20 14:34:59.191350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:58.378 [2024-11-20 14:34:59.198899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:58.378 [2024-11-20 14:34:59.199064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:58.378 [2024-11-20 14:34:59.199466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.378 
14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.378 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.379 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.379 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.379 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.379 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.379 "name": "raid_bdev1", 00:17:58.379 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:17:58.379 "strip_size_kb": 64, 00:17:58.379 "state": "online", 00:17:58.379 "raid_level": "raid5f", 00:17:58.379 "superblock": true, 00:17:58.379 "num_base_bdevs": 4, 00:17:58.379 "num_base_bdevs_discovered": 4, 00:17:58.379 "num_base_bdevs_operational": 4, 00:17:58.379 "base_bdevs_list": [ 00:17:58.379 { 00:17:58.379 "name": "pt1", 00:17:58.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.379 "is_configured": true, 00:17:58.379 "data_offset": 2048, 00:17:58.379 "data_size": 63488 00:17:58.379 }, 00:17:58.379 { 00:17:58.379 "name": "pt2", 00:17:58.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.379 "is_configured": true, 00:17:58.379 "data_offset": 2048, 00:17:58.379 
"data_size": 63488 00:17:58.379 }, 00:17:58.379 { 00:17:58.379 "name": "pt3", 00:17:58.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.379 "is_configured": true, 00:17:58.379 "data_offset": 2048, 00:17:58.379 "data_size": 63488 00:17:58.379 }, 00:17:58.379 { 00:17:58.379 "name": "pt4", 00:17:58.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.379 "is_configured": true, 00:17:58.379 "data_offset": 2048, 00:17:58.379 "data_size": 63488 00:17:58.379 } 00:17:58.379 ] 00:17:58.379 }' 00:17:58.379 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.379 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.690 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:58.690 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.691 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:58.691 [2024-11-20 14:34:59.731433] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.949 "name": "raid_bdev1", 00:17:58.949 "aliases": [ 00:17:58.949 "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd" 00:17:58.949 ], 00:17:58.949 "product_name": "Raid Volume", 00:17:58.949 "block_size": 512, 00:17:58.949 "num_blocks": 190464, 00:17:58.949 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:17:58.949 "assigned_rate_limits": { 00:17:58.949 "rw_ios_per_sec": 0, 00:17:58.949 "rw_mbytes_per_sec": 0, 00:17:58.949 "r_mbytes_per_sec": 0, 00:17:58.949 "w_mbytes_per_sec": 0 00:17:58.949 }, 00:17:58.949 "claimed": false, 00:17:58.949 "zoned": false, 00:17:58.949 "supported_io_types": { 00:17:58.949 "read": true, 00:17:58.949 "write": true, 00:17:58.949 "unmap": false, 00:17:58.949 "flush": false, 00:17:58.949 "reset": true, 00:17:58.949 "nvme_admin": false, 00:17:58.949 "nvme_io": false, 00:17:58.949 "nvme_io_md": false, 00:17:58.949 "write_zeroes": true, 00:17:58.949 "zcopy": false, 00:17:58.949 "get_zone_info": false, 00:17:58.949 "zone_management": false, 00:17:58.949 "zone_append": false, 00:17:58.949 "compare": false, 00:17:58.949 "compare_and_write": false, 00:17:58.949 "abort": false, 00:17:58.949 "seek_hole": false, 00:17:58.949 "seek_data": false, 00:17:58.949 "copy": false, 00:17:58.949 "nvme_iov_md": false 00:17:58.949 }, 00:17:58.949 "driver_specific": { 00:17:58.949 "raid": { 00:17:58.949 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:17:58.949 "strip_size_kb": 64, 00:17:58.949 "state": "online", 00:17:58.949 "raid_level": "raid5f", 00:17:58.949 "superblock": true, 00:17:58.949 "num_base_bdevs": 4, 00:17:58.949 "num_base_bdevs_discovered": 4, 00:17:58.949 "num_base_bdevs_operational": 4, 00:17:58.949 "base_bdevs_list": [ 00:17:58.949 { 00:17:58.949 "name": "pt1", 00:17:58.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:58.949 "is_configured": true, 00:17:58.949 "data_offset": 2048, 
00:17:58.949 "data_size": 63488 00:17:58.949 }, 00:17:58.949 { 00:17:58.949 "name": "pt2", 00:17:58.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.949 "is_configured": true, 00:17:58.949 "data_offset": 2048, 00:17:58.949 "data_size": 63488 00:17:58.949 }, 00:17:58.949 { 00:17:58.949 "name": "pt3", 00:17:58.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:58.949 "is_configured": true, 00:17:58.949 "data_offset": 2048, 00:17:58.949 "data_size": 63488 00:17:58.949 }, 00:17:58.949 { 00:17:58.949 "name": "pt4", 00:17:58.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:58.949 "is_configured": true, 00:17:58.949 "data_offset": 2048, 00:17:58.949 "data_size": 63488 00:17:58.949 } 00:17:58.949 ] 00:17:58.949 } 00:17:58.949 } 00:17:58.949 }' 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:58.949 pt2 00:17:58.949 pt3 00:17:58.949 pt4' 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.949 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.950 14:34:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.950 14:34:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.950 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.208 14:35:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:59.209 [2024-11-20 14:35:00.111559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd ']' 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 [2024-11-20 14:35:00.159288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.209 [2024-11-20 14:35:00.159541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.209 [2024-11-20 14:35:00.159699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.209 [2024-11-20 14:35:00.159830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.209 [2024-11-20 14:35:00.159858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:59.209 
14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.209 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.209 14:35:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:59.468 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.468 [2024-11-20 14:35:00.343393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:59.468 [2024-11-20 14:35:00.346396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:59.468 [2024-11-20 14:35:00.346467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:59.468 [2024-11-20 14:35:00.346537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:59.468 [2024-11-20 14:35:00.346619] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:59.468 [2024-11-20 14:35:00.346768] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:59.468 [2024-11-20 14:35:00.346822] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:59.469 [2024-11-20 14:35:00.346858] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:59.469 [2024-11-20 14:35:00.346885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.469 [2024-11-20 14:35:00.346902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:59.469 request: 00:17:59.469 { 00:17:59.469 "name": "raid_bdev1", 00:17:59.469 "raid_level": "raid5f", 00:17:59.469 "base_bdevs": [ 00:17:59.469 "malloc1", 00:17:59.469 "malloc2", 00:17:59.469 "malloc3", 00:17:59.469 "malloc4" 00:17:59.469 ], 00:17:59.469 "strip_size_kb": 64, 00:17:59.469 "superblock": false, 00:17:59.469 "method": "bdev_raid_create", 00:17:59.469 "req_id": 1 00:17:59.469 } 00:17:59.469 Got JSON-RPC error response 
00:17:59.469 response: 00:17:59.469 { 00:17:59.469 "code": -17, 00:17:59.469 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:59.469 } 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.469 [2024-11-20 14:35:00.411397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:59.469 [2024-11-20 14:35:00.411666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:59.469 [2024-11-20 14:35:00.411703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:59.469 [2024-11-20 14:35:00.411722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.469 [2024-11-20 14:35:00.415004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.469 [2024-11-20 14:35:00.415113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:59.469 [2024-11-20 14:35:00.415250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:59.469 [2024-11-20 14:35:00.415321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.469 pt1 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.469 "name": "raid_bdev1", 00:17:59.469 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:17:59.469 "strip_size_kb": 64, 00:17:59.469 "state": "configuring", 00:17:59.469 "raid_level": "raid5f", 00:17:59.469 "superblock": true, 00:17:59.469 "num_base_bdevs": 4, 00:17:59.469 "num_base_bdevs_discovered": 1, 00:17:59.469 "num_base_bdevs_operational": 4, 00:17:59.469 "base_bdevs_list": [ 00:17:59.469 { 00:17:59.469 "name": "pt1", 00:17:59.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.469 "is_configured": true, 00:17:59.469 "data_offset": 2048, 00:17:59.469 "data_size": 63488 00:17:59.469 }, 00:17:59.469 { 00:17:59.469 "name": null, 00:17:59.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.469 "is_configured": false, 00:17:59.469 "data_offset": 2048, 00:17:59.469 "data_size": 63488 00:17:59.469 }, 00:17:59.469 { 00:17:59.469 "name": null, 00:17:59.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.469 "is_configured": false, 00:17:59.469 "data_offset": 2048, 00:17:59.469 "data_size": 63488 00:17:59.469 }, 00:17:59.469 { 00:17:59.469 "name": null, 00:17:59.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.469 "is_configured": false, 00:17:59.469 "data_offset": 2048, 00:17:59.469 "data_size": 63488 00:17:59.469 } 00:17:59.469 ] 00:17:59.469 }' 
00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.469 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.036 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:00.036 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.036 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.036 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.036 [2024-11-20 14:35:00.939766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.036 [2024-11-20 14:35:00.939880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.036 [2024-11-20 14:35:00.939913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:00.036 [2024-11-20 14:35:00.939932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.036 [2024-11-20 14:35:00.940563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.036 [2024-11-20 14:35:00.940598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.036 [2024-11-20 14:35:00.940754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.036 [2024-11-20 14:35:00.940797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.036 pt2 00:18:00.036 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.036 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.036 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.037 [2024-11-20 14:35:00.947746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.037 14:35:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:00.037 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.037 "name": "raid_bdev1", 00:18:00.037 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:00.037 "strip_size_kb": 64, 00:18:00.037 "state": "configuring", 00:18:00.037 "raid_level": "raid5f", 00:18:00.037 "superblock": true, 00:18:00.037 "num_base_bdevs": 4, 00:18:00.037 "num_base_bdevs_discovered": 1, 00:18:00.037 "num_base_bdevs_operational": 4, 00:18:00.037 "base_bdevs_list": [ 00:18:00.037 { 00:18:00.037 "name": "pt1", 00:18:00.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.037 "is_configured": true, 00:18:00.037 "data_offset": 2048, 00:18:00.037 "data_size": 63488 00:18:00.037 }, 00:18:00.037 { 00:18:00.037 "name": null, 00:18:00.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.037 "is_configured": false, 00:18:00.037 "data_offset": 0, 00:18:00.037 "data_size": 63488 00:18:00.037 }, 00:18:00.037 { 00:18:00.037 "name": null, 00:18:00.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.037 "is_configured": false, 00:18:00.037 "data_offset": 2048, 00:18:00.037 "data_size": 63488 00:18:00.037 }, 00:18:00.037 { 00:18:00.037 "name": null, 00:18:00.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.037 "is_configured": false, 00:18:00.037 "data_offset": 2048, 00:18:00.037 "data_size": 63488 00:18:00.037 } 00:18:00.037 ] 00:18:00.037 }' 00:18:00.037 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.037 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.604 [2024-11-20 14:35:01.439917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.604 [2024-11-20 14:35:01.440213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.604 [2024-11-20 14:35:01.440256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:00.604 [2024-11-20 14:35:01.440273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.604 [2024-11-20 14:35:01.440923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.604 [2024-11-20 14:35:01.440950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.604 [2024-11-20 14:35:01.441064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.604 [2024-11-20 14:35:01.441113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.604 pt2 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.604 [2024-11-20 14:35:01.447859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:00.604 [2024-11-20 14:35:01.447933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.604 [2024-11-20 14:35:01.447998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:00.604 [2024-11-20 14:35:01.448028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.604 [2024-11-20 14:35:01.448452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.604 [2024-11-20 14:35:01.448482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:00.604 [2024-11-20 14:35:01.448588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:00.604 [2024-11-20 14:35:01.448622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:00.604 pt3 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.604 [2024-11-20 14:35:01.455833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:00.604 [2024-11-20 14:35:01.455916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.604 [2024-11-20 14:35:01.455944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:00.604 [2024-11-20 14:35:01.455958] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.604 [2024-11-20 14:35:01.456519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.604 [2024-11-20 14:35:01.456582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:00.604 [2024-11-20 14:35:01.456680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:00.604 [2024-11-20 14:35:01.456715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:00.604 [2024-11-20 14:35:01.456915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:00.604 [2024-11-20 14:35:01.456932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:00.604 [2024-11-20 14:35:01.457266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:00.604 [2024-11-20 14:35:01.464060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:00.604 [2024-11-20 14:35:01.464091] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:00.604 [2024-11-20 14:35:01.464306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.604 pt4 00:18:00.604 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.605 "name": "raid_bdev1", 00:18:00.605 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:00.605 "strip_size_kb": 64, 00:18:00.605 "state": "online", 00:18:00.605 "raid_level": "raid5f", 00:18:00.605 "superblock": true, 00:18:00.605 "num_base_bdevs": 4, 00:18:00.605 "num_base_bdevs_discovered": 4, 00:18:00.605 "num_base_bdevs_operational": 4, 00:18:00.605 "base_bdevs_list": [ 00:18:00.605 { 00:18:00.605 "name": "pt1", 00:18:00.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.605 "is_configured": true, 00:18:00.605 
"data_offset": 2048, 00:18:00.605 "data_size": 63488 00:18:00.605 }, 00:18:00.605 { 00:18:00.605 "name": "pt2", 00:18:00.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.605 "is_configured": true, 00:18:00.605 "data_offset": 2048, 00:18:00.605 "data_size": 63488 00:18:00.605 }, 00:18:00.605 { 00:18:00.605 "name": "pt3", 00:18:00.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.605 "is_configured": true, 00:18:00.605 "data_offset": 2048, 00:18:00.605 "data_size": 63488 00:18:00.605 }, 00:18:00.605 { 00:18:00.605 "name": "pt4", 00:18:00.605 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.605 "is_configured": true, 00:18:00.605 "data_offset": 2048, 00:18:00.605 "data_size": 63488 00:18:00.605 } 00:18:00.605 ] 00:18:00.605 }' 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.605 14:35:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.174 14:35:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.174 [2024-11-20 14:35:02.016688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.174 "name": "raid_bdev1", 00:18:01.174 "aliases": [ 00:18:01.174 "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd" 00:18:01.174 ], 00:18:01.174 "product_name": "Raid Volume", 00:18:01.174 "block_size": 512, 00:18:01.174 "num_blocks": 190464, 00:18:01.174 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:01.174 "assigned_rate_limits": { 00:18:01.174 "rw_ios_per_sec": 0, 00:18:01.174 "rw_mbytes_per_sec": 0, 00:18:01.174 "r_mbytes_per_sec": 0, 00:18:01.174 "w_mbytes_per_sec": 0 00:18:01.174 }, 00:18:01.174 "claimed": false, 00:18:01.174 "zoned": false, 00:18:01.174 "supported_io_types": { 00:18:01.174 "read": true, 00:18:01.174 "write": true, 00:18:01.174 "unmap": false, 00:18:01.174 "flush": false, 00:18:01.174 "reset": true, 00:18:01.174 "nvme_admin": false, 00:18:01.174 "nvme_io": false, 00:18:01.174 "nvme_io_md": false, 00:18:01.174 "write_zeroes": true, 00:18:01.174 "zcopy": false, 00:18:01.174 "get_zone_info": false, 00:18:01.174 "zone_management": false, 00:18:01.174 "zone_append": false, 00:18:01.174 "compare": false, 00:18:01.174 "compare_and_write": false, 00:18:01.174 "abort": false, 00:18:01.174 "seek_hole": false, 00:18:01.174 "seek_data": false, 00:18:01.174 "copy": false, 00:18:01.174 "nvme_iov_md": false 00:18:01.174 }, 00:18:01.174 "driver_specific": { 00:18:01.174 "raid": { 00:18:01.174 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:01.174 "strip_size_kb": 64, 00:18:01.174 "state": "online", 00:18:01.174 "raid_level": "raid5f", 00:18:01.174 "superblock": true, 00:18:01.174 "num_base_bdevs": 4, 00:18:01.174 "num_base_bdevs_discovered": 4, 
00:18:01.174 "num_base_bdevs_operational": 4, 00:18:01.174 "base_bdevs_list": [ 00:18:01.174 { 00:18:01.174 "name": "pt1", 00:18:01.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.174 "is_configured": true, 00:18:01.174 "data_offset": 2048, 00:18:01.174 "data_size": 63488 00:18:01.174 }, 00:18:01.174 { 00:18:01.174 "name": "pt2", 00:18:01.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.174 "is_configured": true, 00:18:01.174 "data_offset": 2048, 00:18:01.174 "data_size": 63488 00:18:01.174 }, 00:18:01.174 { 00:18:01.174 "name": "pt3", 00:18:01.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.174 "is_configured": true, 00:18:01.174 "data_offset": 2048, 00:18:01.174 "data_size": 63488 00:18:01.174 }, 00:18:01.174 { 00:18:01.174 "name": "pt4", 00:18:01.174 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.174 "is_configured": true, 00:18:01.174 "data_offset": 2048, 00:18:01.174 "data_size": 63488 00:18:01.174 } 00:18:01.174 ] 00:18:01.174 } 00:18:01.174 } 00:18:01.174 }' 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:01.174 pt2 00:18:01.174 pt3 00:18:01.174 pt4' 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:01.174 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.434 [2024-11-20 14:35:02.408808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.434 14:35:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd '!=' ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd ']' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.434 [2024-11-20 14:35:02.460574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.434 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.694 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.694 "name": "raid_bdev1", 00:18:01.694 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:01.694 "strip_size_kb": 64, 00:18:01.694 "state": "online", 00:18:01.694 "raid_level": "raid5f", 00:18:01.694 "superblock": true, 00:18:01.694 "num_base_bdevs": 4, 00:18:01.694 "num_base_bdevs_discovered": 3, 00:18:01.694 "num_base_bdevs_operational": 3, 00:18:01.694 "base_bdevs_list": [ 00:18:01.694 { 00:18:01.694 "name": null, 00:18:01.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.694 "is_configured": false, 00:18:01.694 "data_offset": 0, 00:18:01.694 "data_size": 63488 00:18:01.694 }, 00:18:01.694 { 00:18:01.694 "name": "pt2", 00:18:01.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.694 "is_configured": true, 00:18:01.694 "data_offset": 2048, 00:18:01.694 "data_size": 63488 00:18:01.694 }, 00:18:01.694 { 00:18:01.694 "name": "pt3", 00:18:01.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.694 "is_configured": true, 00:18:01.694 "data_offset": 2048, 00:18:01.694 "data_size": 63488 00:18:01.694 }, 00:18:01.694 { 00:18:01.694 "name": "pt4", 00:18:01.694 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.694 "is_configured": true, 00:18:01.694 
"data_offset": 2048, 00:18:01.694 "data_size": 63488 00:18:01.694 } 00:18:01.694 ] 00:18:01.694 }' 00:18:01.694 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.694 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.953 [2024-11-20 14:35:02.992736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.953 [2024-11-20 14:35:02.992816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.953 [2024-11-20 14:35:02.992950] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.953 [2024-11-20 14:35:02.993115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.953 [2024-11-20 14:35:02.993165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.953 14:35:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 [2024-11-20 14:35:03.084689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.211 [2024-11-20 14:35:03.084799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.211 [2024-11-20 14:35:03.084834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:02.211 [2024-11-20 14:35:03.084849] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.211 [2024-11-20 14:35:03.088201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.211 [2024-11-20 14:35:03.088240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.211 [2024-11-20 14:35:03.088386] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.211 [2024-11-20 14:35:03.088446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.211 pt2 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.211 "name": "raid_bdev1", 00:18:02.211 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:02.211 "strip_size_kb": 64, 00:18:02.211 "state": "configuring", 00:18:02.211 "raid_level": "raid5f", 00:18:02.211 "superblock": true, 00:18:02.211 
"num_base_bdevs": 4, 00:18:02.211 "num_base_bdevs_discovered": 1, 00:18:02.211 "num_base_bdevs_operational": 3, 00:18:02.211 "base_bdevs_list": [ 00:18:02.211 { 00:18:02.211 "name": null, 00:18:02.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.211 "is_configured": false, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 }, 00:18:02.211 { 00:18:02.211 "name": "pt2", 00:18:02.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.211 "is_configured": true, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 }, 00:18:02.211 { 00:18:02.211 "name": null, 00:18:02.211 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.211 "is_configured": false, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 }, 00:18:02.211 { 00:18:02.211 "name": null, 00:18:02.211 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.211 "is_configured": false, 00:18:02.211 "data_offset": 2048, 00:18:02.211 "data_size": 63488 00:18:02.211 } 00:18:02.211 ] 00:18:02.211 }' 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.211 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:02.776 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:02.776 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:02.776 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.776 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 [2024-11-20 14:35:03.633087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:02.776 [2024-11-20 
14:35:03.633255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.776 [2024-11-20 14:35:03.633345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:02.776 [2024-11-20 14:35:03.633373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.776 [2024-11-20 14:35:03.634350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.776 [2024-11-20 14:35:03.634416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:02.776 [2024-11-20 14:35:03.634602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:02.776 [2024-11-20 14:35:03.634722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:02.776 pt3 00:18:02.776 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.776 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.777 "name": "raid_bdev1", 00:18:02.777 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:02.777 "strip_size_kb": 64, 00:18:02.777 "state": "configuring", 00:18:02.777 "raid_level": "raid5f", 00:18:02.777 "superblock": true, 00:18:02.777 "num_base_bdevs": 4, 00:18:02.777 "num_base_bdevs_discovered": 2, 00:18:02.777 "num_base_bdevs_operational": 3, 00:18:02.777 "base_bdevs_list": [ 00:18:02.777 { 00:18:02.777 "name": null, 00:18:02.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.777 "is_configured": false, 00:18:02.777 "data_offset": 2048, 00:18:02.777 "data_size": 63488 00:18:02.777 }, 00:18:02.777 { 00:18:02.777 "name": "pt2", 00:18:02.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.777 "is_configured": true, 00:18:02.777 "data_offset": 2048, 00:18:02.777 "data_size": 63488 00:18:02.777 }, 00:18:02.777 { 00:18:02.777 "name": "pt3", 00:18:02.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.777 "is_configured": true, 00:18:02.777 "data_offset": 2048, 00:18:02.777 "data_size": 63488 00:18:02.777 }, 00:18:02.777 { 00:18:02.777 "name": null, 00:18:02.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.777 "is_configured": false, 00:18:02.777 "data_offset": 2048, 
00:18:02.777 "data_size": 63488 00:18:02.777 } 00:18:02.777 ] 00:18:02.777 }' 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.777 14:35:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.344 [2024-11-20 14:35:04.177293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:03.344 [2024-11-20 14:35:04.177414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.344 [2024-11-20 14:35:04.177454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:03.344 [2024-11-20 14:35:04.177471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.344 [2024-11-20 14:35:04.178262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.344 [2024-11-20 14:35:04.178296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:03.344 [2024-11-20 14:35:04.178411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:03.344 [2024-11-20 14:35:04.178456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:03.344 [2024-11-20 14:35:04.178712] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:03.344 [2024-11-20 14:35:04.178730] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:03.344 [2024-11-20 14:35:04.179092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:03.344 [2024-11-20 14:35:04.186778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:03.344 [2024-11-20 14:35:04.186814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:03.344 [2024-11-20 14:35:04.187213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.344 pt4 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.344 
14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.344 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.344 "name": "raid_bdev1", 00:18:03.344 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:03.344 "strip_size_kb": 64, 00:18:03.344 "state": "online", 00:18:03.344 "raid_level": "raid5f", 00:18:03.344 "superblock": true, 00:18:03.344 "num_base_bdevs": 4, 00:18:03.344 "num_base_bdevs_discovered": 3, 00:18:03.344 "num_base_bdevs_operational": 3, 00:18:03.344 "base_bdevs_list": [ 00:18:03.344 { 00:18:03.344 "name": null, 00:18:03.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.344 "is_configured": false, 00:18:03.344 "data_offset": 2048, 00:18:03.344 "data_size": 63488 00:18:03.344 }, 00:18:03.344 { 00:18:03.344 "name": "pt2", 00:18:03.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.344 "is_configured": true, 00:18:03.344 "data_offset": 2048, 00:18:03.344 "data_size": 63488 00:18:03.344 }, 00:18:03.344 { 00:18:03.344 "name": "pt3", 00:18:03.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.344 "is_configured": true, 00:18:03.344 "data_offset": 2048, 00:18:03.344 "data_size": 63488 00:18:03.344 }, 00:18:03.344 { 00:18:03.344 "name": "pt4", 00:18:03.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.344 "is_configured": true, 00:18:03.344 "data_offset": 2048, 00:18:03.345 "data_size": 63488 00:18:03.345 } 00:18:03.345 ] 00:18:03.345 }' 00:18:03.345 14:35:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.345 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.993 [2024-11-20 14:35:04.727441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.993 [2024-11-20 14:35:04.727478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.993 [2024-11-20 14:35:04.727584] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.993 [2024-11-20 14:35:04.727718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.993 [2024-11-20 14:35:04.727757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.993 [2024-11-20 14:35:04.807423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.993 [2024-11-20 14:35:04.807519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.993 [2024-11-20 14:35:04.807558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:03.993 [2024-11-20 14:35:04.807580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.993 [2024-11-20 14:35:04.811069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.993 [2024-11-20 14:35:04.811130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.993 [2024-11-20 14:35:04.811245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:03.993 [2024-11-20 14:35:04.811360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.993 
[2024-11-20 14:35:04.811536] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:03.993 [2024-11-20 14:35:04.811561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.993 [2024-11-20 14:35:04.811584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:03.993 [2024-11-20 14:35:04.811660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.993 [2024-11-20 14:35:04.811945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.993 pt1 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.993 "name": "raid_bdev1", 00:18:03.993 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:03.993 "strip_size_kb": 64, 00:18:03.993 "state": "configuring", 00:18:03.993 "raid_level": "raid5f", 00:18:03.993 "superblock": true, 00:18:03.993 "num_base_bdevs": 4, 00:18:03.993 "num_base_bdevs_discovered": 2, 00:18:03.993 "num_base_bdevs_operational": 3, 00:18:03.993 "base_bdevs_list": [ 00:18:03.993 { 00:18:03.993 "name": null, 00:18:03.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.993 "is_configured": false, 00:18:03.993 "data_offset": 2048, 00:18:03.993 "data_size": 63488 00:18:03.993 }, 00:18:03.993 { 00:18:03.993 "name": "pt2", 00:18:03.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.993 "is_configured": true, 00:18:03.993 "data_offset": 2048, 00:18:03.993 "data_size": 63488 00:18:03.993 }, 00:18:03.993 { 00:18:03.993 "name": "pt3", 00:18:03.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.993 "is_configured": true, 00:18:03.993 "data_offset": 2048, 00:18:03.993 "data_size": 63488 00:18:03.993 }, 00:18:03.993 { 00:18:03.993 "name": null, 00:18:03.993 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.993 "is_configured": false, 00:18:03.993 "data_offset": 2048, 00:18:03.993 "data_size": 63488 00:18:03.993 } 00:18:03.993 ] 
00:18:03.993 }' 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.993 14:35:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.563 [2024-11-20 14:35:05.383906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:04.563 [2024-11-20 14:35:05.384070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.563 [2024-11-20 14:35:05.384132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:04.563 [2024-11-20 14:35:05.384147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.563 [2024-11-20 14:35:05.384810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.563 [2024-11-20 14:35:05.384837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:04.563 [2024-11-20 14:35:05.384965] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:04.563 [2024-11-20 14:35:05.385002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:04.563 [2024-11-20 14:35:05.385213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:04.563 [2024-11-20 14:35:05.385231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:04.563 [2024-11-20 14:35:05.385559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:04.563 [2024-11-20 14:35:05.392554] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:04.563 [2024-11-20 14:35:05.392603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:04.563 [2024-11-20 14:35:05.393004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.563 pt4 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.563 14:35:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.563 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.563 "name": "raid_bdev1", 00:18:04.563 "uuid": "ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd", 00:18:04.563 "strip_size_kb": 64, 00:18:04.563 "state": "online", 00:18:04.563 "raid_level": "raid5f", 00:18:04.563 "superblock": true, 00:18:04.563 "num_base_bdevs": 4, 00:18:04.563 "num_base_bdevs_discovered": 3, 00:18:04.563 "num_base_bdevs_operational": 3, 00:18:04.563 "base_bdevs_list": [ 00:18:04.563 { 00:18:04.563 "name": null, 00:18:04.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.563 "is_configured": false, 00:18:04.563 "data_offset": 2048, 00:18:04.563 "data_size": 63488 00:18:04.563 }, 00:18:04.563 { 00:18:04.563 "name": "pt2", 00:18:04.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.563 "is_configured": true, 00:18:04.563 "data_offset": 2048, 00:18:04.563 "data_size": 63488 00:18:04.563 }, 00:18:04.563 { 00:18:04.563 "name": "pt3", 00:18:04.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.563 "is_configured": true, 00:18:04.563 "data_offset": 2048, 00:18:04.563 "data_size": 63488 
00:18:04.563 }, 00:18:04.563 { 00:18:04.563 "name": "pt4", 00:18:04.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.564 "is_configured": true, 00:18:04.564 "data_offset": 2048, 00:18:04.564 "data_size": 63488 00:18:04.564 } 00:18:04.564 ] 00:18:04.564 }' 00:18:04.564 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.564 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.130 14:35:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.130 [2024-11-20 14:35:05.989135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd '!=' ab8edb9f-0c3e-41ca-8fa2-24fde9c77ebd ']' 00:18:05.130 14:35:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84610 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84610 ']' 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84610 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84610 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.130 killing process with pid 84610 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84610' 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84610 00:18:05.130 [2024-11-20 14:35:06.069206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.130 14:35:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84610 00:18:05.130 [2024-11-20 14:35:06.069331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.130 [2024-11-20 14:35:06.069430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.130 [2024-11-20 14:35:06.069451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:05.389 [2024-11-20 14:35:06.403688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.764 14:35:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:06.764 
00:18:06.764 real 0m9.615s 00:18:06.764 user 0m15.735s 00:18:06.764 sys 0m1.439s 00:18:06.764 14:35:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.764 14:35:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.764 ************************************ 00:18:06.764 END TEST raid5f_superblock_test 00:18:06.764 ************************************ 00:18:06.764 14:35:07 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:06.764 14:35:07 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:06.764 14:35:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:06.764 14:35:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.764 14:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.764 ************************************ 00:18:06.764 START TEST raid5f_rebuild_test 00:18:06.764 ************************************ 00:18:06.764 14:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:06.764 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:06.764 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:06.764 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:06.764 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:06.764 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:06.765 14:35:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85101 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85101 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:06.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85101 ']' 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.765 14:35:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.765 [2024-11-20 14:35:07.634949] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:18:06.765 [2024-11-20 14:35:07.635398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85101 ] 00:18:06.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:06.765 Zero copy mechanism will not be used. 00:18:07.023 [2024-11-20 14:35:07.825312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.023 [2024-11-20 14:35:07.958401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.282 [2024-11-20 14:35:08.158706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.282 [2024-11-20 14:35:08.158759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 BaseBdev1_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:18:07.850 [2024-11-20 14:35:08.678906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:07.850 [2024-11-20 14:35:08.678976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.850 [2024-11-20 14:35:08.679010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:07.850 [2024-11-20 14:35:08.679030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.850 [2024-11-20 14:35:08.681831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.850 [2024-11-20 14:35:08.681882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:07.850 BaseBdev1 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 BaseBdev2_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 [2024-11-20 14:35:08.728780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:07.850 [2024-11-20 14:35:08.728855] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.850 [2024-11-20 14:35:08.728889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:07.850 [2024-11-20 14:35:08.728907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.850 [2024-11-20 14:35:08.732001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.850 [2024-11-20 14:35:08.732063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.850 BaseBdev2 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 BaseBdev3_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 [2024-11-20 14:35:08.788283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:07.850 [2024-11-20 14:35:08.788349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.850 [2024-11-20 14:35:08.788381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:07.850 
[2024-11-20 14:35:08.788400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.850 [2024-11-20 14:35:08.791471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.850 [2024-11-20 14:35:08.791520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:07.850 BaseBdev3 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 BaseBdev4_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 [2024-11-20 14:35:08.841515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:07.850 [2024-11-20 14:35:08.841602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.850 [2024-11-20 14:35:08.841647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:07.850 [2024-11-20 14:35:08.841698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.850 [2024-11-20 14:35:08.844595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:07.850 [2024-11-20 14:35:08.844692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:07.850 BaseBdev4 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 spare_malloc 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 spare_delay 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.850 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.850 [2024-11-20 14:35:08.903192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:07.850 [2024-11-20 14:35:08.903271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.850 [2024-11-20 14:35:08.903298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:07.850 [2024-11-20 14:35:08.903316] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.109 [2024-11-20 14:35:08.906524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.109 [2024-11-20 14:35:08.906572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.109 spare 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.109 [2024-11-20 14:35:08.915218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.109 [2024-11-20 14:35:08.917848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.109 [2024-11-20 14:35:08.917958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.109 [2024-11-20 14:35:08.918076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:08.109 [2024-11-20 14:35:08.918235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:08.109 [2024-11-20 14:35:08.918257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:08.109 [2024-11-20 14:35:08.918581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:08.109 [2024-11-20 14:35:08.925542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:08.109 [2024-11-20 14:35:08.925572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:08.109 [2024-11-20 
14:35:08.925884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.109 "name": "raid_bdev1", 00:18:08.109 "uuid": 
"6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:08.109 "strip_size_kb": 64, 00:18:08.109 "state": "online", 00:18:08.109 "raid_level": "raid5f", 00:18:08.109 "superblock": false, 00:18:08.109 "num_base_bdevs": 4, 00:18:08.109 "num_base_bdevs_discovered": 4, 00:18:08.109 "num_base_bdevs_operational": 4, 00:18:08.109 "base_bdevs_list": [ 00:18:08.109 { 00:18:08.109 "name": "BaseBdev1", 00:18:08.109 "uuid": "5d69bcce-5fa5-54ee-a686-acfc953f726d", 00:18:08.109 "is_configured": true, 00:18:08.109 "data_offset": 0, 00:18:08.109 "data_size": 65536 00:18:08.109 }, 00:18:08.109 { 00:18:08.109 "name": "BaseBdev2", 00:18:08.109 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:08.109 "is_configured": true, 00:18:08.109 "data_offset": 0, 00:18:08.109 "data_size": 65536 00:18:08.109 }, 00:18:08.109 { 00:18:08.109 "name": "BaseBdev3", 00:18:08.109 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:08.109 "is_configured": true, 00:18:08.109 "data_offset": 0, 00:18:08.109 "data_size": 65536 00:18:08.109 }, 00:18:08.109 { 00:18:08.109 "name": "BaseBdev4", 00:18:08.109 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:08.109 "is_configured": true, 00:18:08.109 "data_offset": 0, 00:18:08.109 "data_size": 65536 00:18:08.109 } 00:18:08.109 ] 00:18:08.109 }' 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.109 14:35:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.676 [2024-11-20 14:35:09.433985] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:08.676 14:35:09 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:08.677 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:08.677 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:08.935 [2024-11-20 14:35:09.793845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:08.935 /dev/nbd0 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.935 1+0 records in 00:18:08.935 1+0 records out 00:18:08.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199129 s, 20.6 MB/s 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.935 14:35:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:08.935 14:35:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:09.501 512+0 records in 00:18:09.501 512+0 records out 00:18:09.501 100663296 bytes (101 MB, 96 MiB) copied, 0.656583 s, 153 MB/s 00:18:09.501 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:09.501 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.501 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:09.501 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:09.501 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:09.501 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.501 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:18:09.760 [2024-11-20 14:35:10.800430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.018 [2024-11-20 14:35:10.840154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.018 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.018 "name": "raid_bdev1", 00:18:10.018 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:10.018 "strip_size_kb": 64, 00:18:10.018 "state": "online", 00:18:10.018 "raid_level": "raid5f", 00:18:10.018 "superblock": false, 00:18:10.018 "num_base_bdevs": 4, 00:18:10.018 "num_base_bdevs_discovered": 3, 00:18:10.018 "num_base_bdevs_operational": 3, 00:18:10.018 "base_bdevs_list": [ 00:18:10.018 { 00:18:10.018 "name": null, 00:18:10.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.018 "is_configured": false, 00:18:10.018 "data_offset": 0, 00:18:10.018 "data_size": 65536 00:18:10.018 }, 00:18:10.018 { 00:18:10.018 "name": "BaseBdev2", 00:18:10.018 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:10.019 "is_configured": true, 00:18:10.019 
"data_offset": 0, 00:18:10.019 "data_size": 65536 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "name": "BaseBdev3", 00:18:10.019 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:10.019 "is_configured": true, 00:18:10.019 "data_offset": 0, 00:18:10.019 "data_size": 65536 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "name": "BaseBdev4", 00:18:10.019 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:10.019 "is_configured": true, 00:18:10.019 "data_offset": 0, 00:18:10.019 "data_size": 65536 00:18:10.019 } 00:18:10.019 ] 00:18:10.019 }' 00:18:10.019 14:35:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.019 14:35:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.586 14:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.586 14:35:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.586 14:35:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.586 [2024-11-20 14:35:11.336271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.586 [2024-11-20 14:35:11.351720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:10.586 14:35:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.586 14:35:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:10.586 [2024-11-20 14:35:11.361467] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.520 "name": "raid_bdev1", 00:18:11.520 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:11.520 "strip_size_kb": 64, 00:18:11.520 "state": "online", 00:18:11.520 "raid_level": "raid5f", 00:18:11.520 "superblock": false, 00:18:11.520 "num_base_bdevs": 4, 00:18:11.520 "num_base_bdevs_discovered": 4, 00:18:11.520 "num_base_bdevs_operational": 4, 00:18:11.520 "process": { 00:18:11.520 "type": "rebuild", 00:18:11.520 "target": "spare", 00:18:11.520 "progress": { 00:18:11.520 "blocks": 17280, 00:18:11.520 "percent": 8 00:18:11.520 } 00:18:11.520 }, 00:18:11.520 "base_bdevs_list": [ 00:18:11.520 { 00:18:11.520 "name": "spare", 00:18:11.520 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:11.520 "is_configured": true, 00:18:11.520 "data_offset": 0, 00:18:11.520 "data_size": 65536 00:18:11.520 }, 00:18:11.520 { 00:18:11.520 "name": "BaseBdev2", 00:18:11.520 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:11.520 "is_configured": true, 00:18:11.520 "data_offset": 0, 00:18:11.520 "data_size": 65536 00:18:11.520 }, 00:18:11.520 { 00:18:11.520 "name": "BaseBdev3", 00:18:11.520 "uuid": 
"8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:11.520 "is_configured": true, 00:18:11.520 "data_offset": 0, 00:18:11.520 "data_size": 65536 00:18:11.520 }, 00:18:11.520 { 00:18:11.520 "name": "BaseBdev4", 00:18:11.520 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:11.520 "is_configured": true, 00:18:11.520 "data_offset": 0, 00:18:11.520 "data_size": 65536 00:18:11.520 } 00:18:11.520 ] 00:18:11.520 }' 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.520 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.520 [2024-11-20 14:35:12.510975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.520 [2024-11-20 14:35:12.573705] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.520 [2024-11-20 14:35:12.573794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.520 [2024-11-20 14:35:12.573823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.520 [2024-11-20 14:35:12.573840] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.778 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.779 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.779 "name": "raid_bdev1", 00:18:11.779 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:11.779 "strip_size_kb": 64, 00:18:11.779 "state": "online", 00:18:11.779 "raid_level": "raid5f", 00:18:11.779 "superblock": false, 00:18:11.779 "num_base_bdevs": 4, 00:18:11.779 "num_base_bdevs_discovered": 3, 00:18:11.779 
"num_base_bdevs_operational": 3, 00:18:11.779 "base_bdevs_list": [ 00:18:11.779 { 00:18:11.779 "name": null, 00:18:11.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.779 "is_configured": false, 00:18:11.779 "data_offset": 0, 00:18:11.779 "data_size": 65536 00:18:11.779 }, 00:18:11.779 { 00:18:11.779 "name": "BaseBdev2", 00:18:11.779 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:11.779 "is_configured": true, 00:18:11.779 "data_offset": 0, 00:18:11.779 "data_size": 65536 00:18:11.779 }, 00:18:11.779 { 00:18:11.779 "name": "BaseBdev3", 00:18:11.779 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:11.779 "is_configured": true, 00:18:11.779 "data_offset": 0, 00:18:11.779 "data_size": 65536 00:18:11.779 }, 00:18:11.779 { 00:18:11.779 "name": "BaseBdev4", 00:18:11.779 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:11.779 "is_configured": true, 00:18:11.779 "data_offset": 0, 00:18:11.779 "data_size": 65536 00:18:11.779 } 00:18:11.779 ] 00:18:11.779 }' 00:18:11.779 14:35:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.779 14:35:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.344 14:35:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.344 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.345 "name": "raid_bdev1", 00:18:12.345 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:12.345 "strip_size_kb": 64, 00:18:12.345 "state": "online", 00:18:12.345 "raid_level": "raid5f", 00:18:12.345 "superblock": false, 00:18:12.345 "num_base_bdevs": 4, 00:18:12.345 "num_base_bdevs_discovered": 3, 00:18:12.345 "num_base_bdevs_operational": 3, 00:18:12.345 "base_bdevs_list": [ 00:18:12.345 { 00:18:12.345 "name": null, 00:18:12.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.345 "is_configured": false, 00:18:12.345 "data_offset": 0, 00:18:12.345 "data_size": 65536 00:18:12.345 }, 00:18:12.345 { 00:18:12.345 "name": "BaseBdev2", 00:18:12.345 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:12.345 "is_configured": true, 00:18:12.345 "data_offset": 0, 00:18:12.345 "data_size": 65536 00:18:12.345 }, 00:18:12.345 { 00:18:12.345 "name": "BaseBdev3", 00:18:12.345 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:12.345 "is_configured": true, 00:18:12.345 "data_offset": 0, 00:18:12.345 "data_size": 65536 00:18:12.345 }, 00:18:12.345 { 00:18:12.345 "name": "BaseBdev4", 00:18:12.345 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:12.345 "is_configured": true, 00:18:12.345 "data_offset": 0, 00:18:12.345 "data_size": 65536 00:18:12.345 } 00:18:12.345 ] 00:18:12.345 }' 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.345 [2024-11-20 14:35:13.266930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.345 [2024-11-20 14:35:13.281586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.345 14:35:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:12.345 [2024-11-20 14:35:13.290790] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.277 14:35:14 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.277 14:35:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.535 "name": "raid_bdev1", 00:18:13.535 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:13.535 "strip_size_kb": 64, 00:18:13.535 "state": "online", 00:18:13.535 "raid_level": "raid5f", 00:18:13.535 "superblock": false, 00:18:13.535 "num_base_bdevs": 4, 00:18:13.535 "num_base_bdevs_discovered": 4, 00:18:13.535 "num_base_bdevs_operational": 4, 00:18:13.535 "process": { 00:18:13.535 "type": "rebuild", 00:18:13.535 "target": "spare", 00:18:13.535 "progress": { 00:18:13.535 "blocks": 17280, 00:18:13.535 "percent": 8 00:18:13.535 } 00:18:13.535 }, 00:18:13.535 "base_bdevs_list": [ 00:18:13.535 { 00:18:13.535 "name": "spare", 00:18:13.535 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:13.535 "is_configured": true, 00:18:13.535 "data_offset": 0, 00:18:13.535 "data_size": 65536 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "name": "BaseBdev2", 00:18:13.535 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:13.535 "is_configured": true, 00:18:13.535 "data_offset": 0, 00:18:13.535 "data_size": 65536 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "name": "BaseBdev3", 00:18:13.535 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:13.535 "is_configured": true, 00:18:13.535 "data_offset": 0, 00:18:13.535 "data_size": 65536 00:18:13.535 }, 00:18:13.535 { 00:18:13.535 "name": "BaseBdev4", 00:18:13.535 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:13.535 "is_configured": true, 00:18:13.535 "data_offset": 0, 00:18:13.535 "data_size": 65536 00:18:13.535 } 00:18:13.535 ] 00:18:13.535 }' 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=676 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.535 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.535 
"name": "raid_bdev1", 00:18:13.535 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:13.535 "strip_size_kb": 64, 00:18:13.535 "state": "online", 00:18:13.535 "raid_level": "raid5f", 00:18:13.535 "superblock": false, 00:18:13.535 "num_base_bdevs": 4, 00:18:13.535 "num_base_bdevs_discovered": 4, 00:18:13.535 "num_base_bdevs_operational": 4, 00:18:13.535 "process": { 00:18:13.535 "type": "rebuild", 00:18:13.535 "target": "spare", 00:18:13.535 "progress": { 00:18:13.536 "blocks": 21120, 00:18:13.536 "percent": 10 00:18:13.536 } 00:18:13.536 }, 00:18:13.536 "base_bdevs_list": [ 00:18:13.536 { 00:18:13.536 "name": "spare", 00:18:13.536 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:13.536 "is_configured": true, 00:18:13.536 "data_offset": 0, 00:18:13.536 "data_size": 65536 00:18:13.536 }, 00:18:13.536 { 00:18:13.536 "name": "BaseBdev2", 00:18:13.536 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:13.536 "is_configured": true, 00:18:13.536 "data_offset": 0, 00:18:13.536 "data_size": 65536 00:18:13.536 }, 00:18:13.536 { 00:18:13.536 "name": "BaseBdev3", 00:18:13.536 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:13.536 "is_configured": true, 00:18:13.536 "data_offset": 0, 00:18:13.536 "data_size": 65536 00:18:13.536 }, 00:18:13.536 { 00:18:13.536 "name": "BaseBdev4", 00:18:13.536 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:13.536 "is_configured": true, 00:18:13.536 "data_offset": 0, 00:18:13.536 "data_size": 65536 00:18:13.536 } 00:18:13.536 ] 00:18:13.536 }' 00:18:13.536 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.536 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.536 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.793 14:35:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.793 14:35:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.742 "name": "raid_bdev1", 00:18:14.742 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:14.742 "strip_size_kb": 64, 00:18:14.742 "state": "online", 00:18:14.742 "raid_level": "raid5f", 00:18:14.742 "superblock": false, 00:18:14.742 "num_base_bdevs": 4, 00:18:14.742 "num_base_bdevs_discovered": 4, 00:18:14.742 "num_base_bdevs_operational": 4, 00:18:14.742 "process": { 00:18:14.742 "type": "rebuild", 00:18:14.742 "target": "spare", 00:18:14.742 "progress": { 00:18:14.742 "blocks": 42240, 00:18:14.742 "percent": 21 00:18:14.742 } 00:18:14.742 }, 00:18:14.742 "base_bdevs_list": [ 00:18:14.742 { 
00:18:14.742 "name": "spare", 00:18:14.742 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:14.742 "is_configured": true, 00:18:14.742 "data_offset": 0, 00:18:14.742 "data_size": 65536 00:18:14.742 }, 00:18:14.742 { 00:18:14.742 "name": "BaseBdev2", 00:18:14.742 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:14.742 "is_configured": true, 00:18:14.742 "data_offset": 0, 00:18:14.742 "data_size": 65536 00:18:14.742 }, 00:18:14.742 { 00:18:14.742 "name": "BaseBdev3", 00:18:14.742 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:14.742 "is_configured": true, 00:18:14.742 "data_offset": 0, 00:18:14.742 "data_size": 65536 00:18:14.742 }, 00:18:14.742 { 00:18:14.742 "name": "BaseBdev4", 00:18:14.742 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:14.742 "is_configured": true, 00:18:14.742 "data_offset": 0, 00:18:14.742 "data_size": 65536 00:18:14.742 } 00:18:14.742 ] 00:18:14.742 }' 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.742 14:35:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.116 "name": "raid_bdev1", 00:18:16.116 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:16.116 "strip_size_kb": 64, 00:18:16.116 "state": "online", 00:18:16.116 "raid_level": "raid5f", 00:18:16.116 "superblock": false, 00:18:16.116 "num_base_bdevs": 4, 00:18:16.116 "num_base_bdevs_discovered": 4, 00:18:16.116 "num_base_bdevs_operational": 4, 00:18:16.116 "process": { 00:18:16.116 "type": "rebuild", 00:18:16.116 "target": "spare", 00:18:16.116 "progress": { 00:18:16.116 "blocks": 65280, 00:18:16.116 "percent": 33 00:18:16.116 } 00:18:16.116 }, 00:18:16.116 "base_bdevs_list": [ 00:18:16.116 { 00:18:16.116 "name": "spare", 00:18:16.116 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:16.116 "is_configured": true, 00:18:16.116 "data_offset": 0, 00:18:16.116 "data_size": 65536 00:18:16.116 }, 00:18:16.116 { 00:18:16.116 "name": "BaseBdev2", 00:18:16.116 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:16.116 "is_configured": true, 00:18:16.116 "data_offset": 0, 00:18:16.116 "data_size": 65536 00:18:16.116 }, 00:18:16.116 { 00:18:16.116 "name": "BaseBdev3", 00:18:16.116 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:16.116 "is_configured": true, 00:18:16.116 "data_offset": 0, 00:18:16.116 
"data_size": 65536 00:18:16.116 }, 00:18:16.116 { 00:18:16.116 "name": "BaseBdev4", 00:18:16.116 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:16.116 "is_configured": true, 00:18:16.116 "data_offset": 0, 00:18:16.116 "data_size": 65536 00:18:16.116 } 00:18:16.116 ] 00:18:16.116 }' 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.116 14:35:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.052 "name": "raid_bdev1", 00:18:17.052 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:17.052 "strip_size_kb": 64, 00:18:17.052 "state": "online", 00:18:17.052 "raid_level": "raid5f", 00:18:17.052 "superblock": false, 00:18:17.052 "num_base_bdevs": 4, 00:18:17.052 "num_base_bdevs_discovered": 4, 00:18:17.052 "num_base_bdevs_operational": 4, 00:18:17.052 "process": { 00:18:17.052 "type": "rebuild", 00:18:17.052 "target": "spare", 00:18:17.052 "progress": { 00:18:17.052 "blocks": 86400, 00:18:17.052 "percent": 43 00:18:17.052 } 00:18:17.052 }, 00:18:17.052 "base_bdevs_list": [ 00:18:17.052 { 00:18:17.052 "name": "spare", 00:18:17.052 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:17.052 "is_configured": true, 00:18:17.052 "data_offset": 0, 00:18:17.052 "data_size": 65536 00:18:17.052 }, 00:18:17.052 { 00:18:17.052 "name": "BaseBdev2", 00:18:17.052 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:17.052 "is_configured": true, 00:18:17.052 "data_offset": 0, 00:18:17.052 "data_size": 65536 00:18:17.052 }, 00:18:17.052 { 00:18:17.052 "name": "BaseBdev3", 00:18:17.052 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:17.052 "is_configured": true, 00:18:17.052 "data_offset": 0, 00:18:17.052 "data_size": 65536 00:18:17.052 }, 00:18:17.052 { 00:18:17.052 "name": "BaseBdev4", 00:18:17.052 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:17.052 "is_configured": true, 00:18:17.052 "data_offset": 0, 00:18:17.052 "data_size": 65536 00:18:17.052 } 00:18:17.052 ] 00:18:17.052 }' 00:18:17.052 14:35:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.052 14:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.052 14:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:17.052 14:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.052 14:35:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.427 "name": "raid_bdev1", 00:18:18.427 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:18.427 "strip_size_kb": 64, 00:18:18.427 "state": "online", 00:18:18.427 "raid_level": "raid5f", 00:18:18.427 "superblock": false, 00:18:18.427 "num_base_bdevs": 4, 00:18:18.427 "num_base_bdevs_discovered": 4, 00:18:18.427 "num_base_bdevs_operational": 4, 00:18:18.427 "process": { 00:18:18.427 "type": "rebuild", 00:18:18.427 "target": "spare", 00:18:18.427 
"progress": { 00:18:18.427 "blocks": 109440, 00:18:18.427 "percent": 55 00:18:18.427 } 00:18:18.427 }, 00:18:18.427 "base_bdevs_list": [ 00:18:18.427 { 00:18:18.427 "name": "spare", 00:18:18.427 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:18.427 "is_configured": true, 00:18:18.427 "data_offset": 0, 00:18:18.427 "data_size": 65536 00:18:18.427 }, 00:18:18.427 { 00:18:18.427 "name": "BaseBdev2", 00:18:18.427 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:18.427 "is_configured": true, 00:18:18.427 "data_offset": 0, 00:18:18.427 "data_size": 65536 00:18:18.427 }, 00:18:18.427 { 00:18:18.427 "name": "BaseBdev3", 00:18:18.427 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:18.427 "is_configured": true, 00:18:18.427 "data_offset": 0, 00:18:18.427 "data_size": 65536 00:18:18.427 }, 00:18:18.427 { 00:18:18.427 "name": "BaseBdev4", 00:18:18.427 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:18.427 "is_configured": true, 00:18:18.427 "data_offset": 0, 00:18:18.427 "data_size": 65536 00:18:18.427 } 00:18:18.427 ] 00:18:18.427 }' 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.427 14:35:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.490 14:35:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.490 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.490 "name": "raid_bdev1", 00:18:19.490 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:19.490 "strip_size_kb": 64, 00:18:19.491 "state": "online", 00:18:19.491 "raid_level": "raid5f", 00:18:19.491 "superblock": false, 00:18:19.491 "num_base_bdevs": 4, 00:18:19.491 "num_base_bdevs_discovered": 4, 00:18:19.491 "num_base_bdevs_operational": 4, 00:18:19.491 "process": { 00:18:19.491 "type": "rebuild", 00:18:19.491 "target": "spare", 00:18:19.491 "progress": { 00:18:19.491 "blocks": 130560, 00:18:19.491 "percent": 66 00:18:19.491 } 00:18:19.491 }, 00:18:19.491 "base_bdevs_list": [ 00:18:19.491 { 00:18:19.491 "name": "spare", 00:18:19.491 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:19.491 "is_configured": true, 00:18:19.491 "data_offset": 0, 00:18:19.491 "data_size": 65536 00:18:19.491 }, 00:18:19.491 { 00:18:19.491 "name": "BaseBdev2", 00:18:19.491 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:19.491 "is_configured": true, 00:18:19.491 "data_offset": 0, 00:18:19.491 "data_size": 65536 00:18:19.491 }, 00:18:19.491 { 
00:18:19.491 "name": "BaseBdev3", 00:18:19.491 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:19.491 "is_configured": true, 00:18:19.491 "data_offset": 0, 00:18:19.491 "data_size": 65536 00:18:19.491 }, 00:18:19.491 { 00:18:19.491 "name": "BaseBdev4", 00:18:19.491 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:19.491 "is_configured": true, 00:18:19.491 "data_offset": 0, 00:18:19.491 "data_size": 65536 00:18:19.491 } 00:18:19.491 ] 00:18:19.491 }' 00:18:19.491 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.491 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.491 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.491 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.491 14:35:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.424 "name": "raid_bdev1", 00:18:20.424 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:20.424 "strip_size_kb": 64, 00:18:20.424 "state": "online", 00:18:20.424 "raid_level": "raid5f", 00:18:20.424 "superblock": false, 00:18:20.424 "num_base_bdevs": 4, 00:18:20.424 "num_base_bdevs_discovered": 4, 00:18:20.424 "num_base_bdevs_operational": 4, 00:18:20.424 "process": { 00:18:20.424 "type": "rebuild", 00:18:20.424 "target": "spare", 00:18:20.424 "progress": { 00:18:20.424 "blocks": 153600, 00:18:20.424 "percent": 78 00:18:20.424 } 00:18:20.424 }, 00:18:20.424 "base_bdevs_list": [ 00:18:20.424 { 00:18:20.424 "name": "spare", 00:18:20.424 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:20.424 "is_configured": true, 00:18:20.424 "data_offset": 0, 00:18:20.424 "data_size": 65536 00:18:20.424 }, 00:18:20.424 { 00:18:20.424 "name": "BaseBdev2", 00:18:20.424 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:20.424 "is_configured": true, 00:18:20.424 "data_offset": 0, 00:18:20.424 "data_size": 65536 00:18:20.424 }, 00:18:20.424 { 00:18:20.424 "name": "BaseBdev3", 00:18:20.424 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:20.424 "is_configured": true, 00:18:20.424 "data_offset": 0, 00:18:20.424 "data_size": 65536 00:18:20.424 }, 00:18:20.424 { 00:18:20.424 "name": "BaseBdev4", 00:18:20.424 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:20.424 "is_configured": true, 00:18:20.424 "data_offset": 0, 00:18:20.424 "data_size": 65536 00:18:20.424 } 00:18:20.424 ] 00:18:20.424 }' 00:18:20.424 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.682 14:35:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.682 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.682 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.682 14:35:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.616 "name": "raid_bdev1", 00:18:21.616 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:21.616 "strip_size_kb": 64, 00:18:21.616 "state": "online", 00:18:21.616 "raid_level": "raid5f", 00:18:21.616 "superblock": false, 00:18:21.616 "num_base_bdevs": 4, 00:18:21.616 
"num_base_bdevs_discovered": 4, 00:18:21.616 "num_base_bdevs_operational": 4, 00:18:21.616 "process": { 00:18:21.616 "type": "rebuild", 00:18:21.616 "target": "spare", 00:18:21.616 "progress": { 00:18:21.616 "blocks": 176640, 00:18:21.616 "percent": 89 00:18:21.616 } 00:18:21.616 }, 00:18:21.616 "base_bdevs_list": [ 00:18:21.616 { 00:18:21.616 "name": "spare", 00:18:21.616 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:21.616 "is_configured": true, 00:18:21.616 "data_offset": 0, 00:18:21.616 "data_size": 65536 00:18:21.616 }, 00:18:21.616 { 00:18:21.616 "name": "BaseBdev2", 00:18:21.616 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:21.616 "is_configured": true, 00:18:21.616 "data_offset": 0, 00:18:21.616 "data_size": 65536 00:18:21.616 }, 00:18:21.616 { 00:18:21.616 "name": "BaseBdev3", 00:18:21.616 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:21.616 "is_configured": true, 00:18:21.616 "data_offset": 0, 00:18:21.616 "data_size": 65536 00:18:21.616 }, 00:18:21.616 { 00:18:21.616 "name": "BaseBdev4", 00:18:21.616 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:21.616 "is_configured": true, 00:18:21.616 "data_offset": 0, 00:18:21.616 "data_size": 65536 00:18:21.616 } 00:18:21.616 ] 00:18:21.616 }' 00:18:21.616 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.874 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.874 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.874 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.874 14:35:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.808 [2024-11-20 14:35:23.696249] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:22.808 [2024-11-20 14:35:23.696604] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:22.808 [2024-11-20 14:35:23.696704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.808 "name": "raid_bdev1", 00:18:22.808 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:22.808 "strip_size_kb": 64, 00:18:22.808 "state": "online", 00:18:22.808 "raid_level": "raid5f", 00:18:22.808 "superblock": false, 00:18:22.808 "num_base_bdevs": 4, 00:18:22.808 "num_base_bdevs_discovered": 4, 00:18:22.808 "num_base_bdevs_operational": 4, 00:18:22.808 "base_bdevs_list": [ 00:18:22.808 { 00:18:22.808 "name": "spare", 00:18:22.808 "uuid": 
"a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:22.808 "is_configured": true, 00:18:22.808 "data_offset": 0, 00:18:22.808 "data_size": 65536 00:18:22.808 }, 00:18:22.808 { 00:18:22.808 "name": "BaseBdev2", 00:18:22.808 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:22.808 "is_configured": true, 00:18:22.808 "data_offset": 0, 00:18:22.808 "data_size": 65536 00:18:22.808 }, 00:18:22.808 { 00:18:22.808 "name": "BaseBdev3", 00:18:22.808 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:22.808 "is_configured": true, 00:18:22.808 "data_offset": 0, 00:18:22.808 "data_size": 65536 00:18:22.808 }, 00:18:22.808 { 00:18:22.808 "name": "BaseBdev4", 00:18:22.808 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:22.808 "is_configured": true, 00:18:22.808 "data_offset": 0, 00:18:22.808 "data_size": 65536 00:18:22.808 } 00:18:22.808 ] 00:18:22.808 }' 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:22.808 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.066 "name": "raid_bdev1", 00:18:23.066 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:23.066 "strip_size_kb": 64, 00:18:23.066 "state": "online", 00:18:23.066 "raid_level": "raid5f", 00:18:23.066 "superblock": false, 00:18:23.066 "num_base_bdevs": 4, 00:18:23.066 "num_base_bdevs_discovered": 4, 00:18:23.066 "num_base_bdevs_operational": 4, 00:18:23.066 "base_bdevs_list": [ 00:18:23.066 { 00:18:23.066 "name": "spare", 00:18:23.066 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:23.066 "is_configured": true, 00:18:23.066 "data_offset": 0, 00:18:23.066 "data_size": 65536 00:18:23.066 }, 00:18:23.066 { 00:18:23.066 "name": "BaseBdev2", 00:18:23.066 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:23.066 "is_configured": true, 00:18:23.066 "data_offset": 0, 00:18:23.066 "data_size": 65536 00:18:23.066 }, 00:18:23.066 { 00:18:23.066 "name": "BaseBdev3", 00:18:23.066 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:23.066 "is_configured": true, 00:18:23.066 "data_offset": 0, 00:18:23.066 "data_size": 65536 00:18:23.066 }, 00:18:23.066 { 00:18:23.066 "name": "BaseBdev4", 00:18:23.066 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:23.066 "is_configured": true, 00:18:23.066 "data_offset": 0, 00:18:23.066 "data_size": 65536 00:18:23.066 } 00:18:23.066 ] 00:18:23.066 }' 00:18:23.066 14:35:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.066 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:23.325 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.325 "name": "raid_bdev1", 00:18:23.325 "uuid": "6d3b8823-8aed-4470-a786-b32ccc806440", 00:18:23.325 "strip_size_kb": 64, 00:18:23.325 "state": "online", 00:18:23.325 "raid_level": "raid5f", 00:18:23.325 "superblock": false, 00:18:23.325 "num_base_bdevs": 4, 00:18:23.325 "num_base_bdevs_discovered": 4, 00:18:23.325 "num_base_bdevs_operational": 4, 00:18:23.325 "base_bdevs_list": [ 00:18:23.325 { 00:18:23.325 "name": "spare", 00:18:23.325 "uuid": "a505a8b7-97a9-5615-8bfe-e24ad95b4a77", 00:18:23.325 "is_configured": true, 00:18:23.325 "data_offset": 0, 00:18:23.325 "data_size": 65536 00:18:23.325 }, 00:18:23.325 { 00:18:23.325 "name": "BaseBdev2", 00:18:23.325 "uuid": "75c1123e-6dab-55ef-ae9b-085a772bd5a9", 00:18:23.325 "is_configured": true, 00:18:23.325 "data_offset": 0, 00:18:23.325 "data_size": 65536 00:18:23.325 }, 00:18:23.325 { 00:18:23.325 "name": "BaseBdev3", 00:18:23.325 "uuid": "8a3bb32b-e104-58a1-b92c-5818d088cb0a", 00:18:23.325 "is_configured": true, 00:18:23.325 "data_offset": 0, 00:18:23.325 "data_size": 65536 00:18:23.325 }, 00:18:23.325 { 00:18:23.325 "name": "BaseBdev4", 00:18:23.325 "uuid": "0acad667-15b1-5efd-8a5d-5292cd288f52", 00:18:23.325 "is_configured": true, 00:18:23.325 "data_offset": 0, 00:18:23.325 "data_size": 65536 00:18:23.325 } 00:18:23.325 ] 00:18:23.325 }' 00:18:23.325 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.325 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.583 [2024-11-20 14:35:24.603803] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.583 [2024-11-20 14:35:24.603851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.583 [2024-11-20 14:35:24.603960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.583 [2024-11-20 14:35:24.604132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.583 [2024-11-20 14:35:24.604147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:23.583 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:23.842 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:24.100 /dev/nbd0 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.100 14:35:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.100 1+0 records in 
00:18:24.100 1+0 records out 00:18:24.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382935 s, 10.7 MB/s 00:18:24.100 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.100 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:24.100 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.101 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.101 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:24.101 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.101 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.101 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:24.359 /dev/nbd1 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.359 1+0 records in 00:18:24.359 1+0 records out 00:18:24.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401068 s, 10.2 MB/s 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.359 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:24.618 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:24.618 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.618 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.618 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:24.618 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:24.618 14:35:25 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.618 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:24.876 14:35:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:25.442 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.442 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.442 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:25.442 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85101 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85101 ']' 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85101 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85101 00:18:25.443 killing process with pid 85101 00:18:25.443 Received shutdown signal, test time was about 60.000000 seconds 00:18:25.443 00:18:25.443 Latency(us) 00:18:25.443 [2024-11-20T14:35:26.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.443 [2024-11-20T14:35:26.500Z] =================================================================================================================== 00:18:25.443 [2024-11-20T14:35:26.500Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85101' 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85101 00:18:25.443 [2024-11-20 14:35:26.232017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.443 14:35:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@978 -- # wait 85101 00:18:25.702 [2024-11-20 14:35:26.683617] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.077 ************************************ 00:18:27.077 END TEST raid5f_rebuild_test 00:18:27.077 ************************************ 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:27.077 00:18:27.077 real 0m20.258s 00:18:27.077 user 0m25.080s 00:18:27.077 sys 0m2.376s 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.077 14:35:27 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:27.077 14:35:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:27.077 14:35:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.077 14:35:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.077 ************************************ 00:18:27.077 START TEST raid5f_rebuild_test_sb 00:18:27.077 ************************************ 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:27.077 
14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 
00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85609 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85609 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85609 ']' 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.077 14:35:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.077 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:27.077 Zero copy mechanism will not be used. 00:18:27.077 [2024-11-20 14:35:27.949674] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:18:27.077 [2024-11-20 14:35:27.949871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85609 ] 00:18:27.336 [2024-11-20 14:35:28.138343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.336 [2024-11-20 14:35:28.277842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.594 [2024-11-20 14:35:28.487570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.594 [2024-11-20 14:35:28.487654] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.160 BaseBdev1_malloc 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.160 [2024-11-20 14:35:28.972363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:28.160 [2024-11-20 14:35:28.972443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.160 [2024-11-20 14:35:28.972477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:28.160 [2024-11-20 14:35:28.972497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.160 [2024-11-20 14:35:28.975316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.160 [2024-11-20 14:35:28.975369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.160 BaseBdev1 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.160 14:35:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.160 BaseBdev2_malloc 00:18:28.160 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.160 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:28.160 
14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.160 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.160 [2024-11-20 14:35:29.024571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:28.160 [2024-11-20 14:35:29.024668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.160 [2024-11-20 14:35:29.024705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:28.161 [2024-11-20 14:35:29.024724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.161 [2024-11-20 14:35:29.027522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.161 [2024-11-20 14:35:29.027574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:28.161 BaseBdev2 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.161 BaseBdev3_malloc 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.161 [2024-11-20 14:35:29.085316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:28.161 [2024-11-20 14:35:29.085392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.161 [2024-11-20 14:35:29.085429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:28.161 [2024-11-20 14:35:29.085449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.161 [2024-11-20 14:35:29.088245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.161 [2024-11-20 14:35:29.088299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:28.161 BaseBdev3 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.161 BaseBdev4_malloc 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.161 [2024-11-20 14:35:29.137528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:28.161 
[2024-11-20 14:35:29.137608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.161 [2024-11-20 14:35:29.137660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:28.161 [2024-11-20 14:35:29.137682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.161 [2024-11-20 14:35:29.140394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.161 [2024-11-20 14:35:29.140451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:28.161 BaseBdev4 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.161 spare_malloc 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.161 spare_delay 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.161 [2024-11-20 14:35:29.194601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.161 [2024-11-20 14:35:29.194694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.161 [2024-11-20 14:35:29.194723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:28.161 [2024-11-20 14:35:29.194742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.161 [2024-11-20 14:35:29.197526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.161 [2024-11-20 14:35:29.197580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.161 spare 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.161 [2024-11-20 14:35:29.202679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.161 [2024-11-20 14:35:29.205124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.161 [2024-11-20 14:35:29.205208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.161 [2024-11-20 14:35:29.205293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:28.161 [2024-11-20 14:35:29.205569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:28.161 [2024-11-20 
14:35:29.205593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:28.161 [2024-11-20 14:35:29.205938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.161 [2024-11-20 14:35:29.212690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.161 [2024-11-20 14:35:29.212720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:28.161 [2024-11-20 14:35:29.212965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.161 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.419 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.419 14:35:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.419 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.419 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.419 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.419 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.419 "name": "raid_bdev1", 00:18:28.419 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:28.419 "strip_size_kb": 64, 00:18:28.419 "state": "online", 00:18:28.419 "raid_level": "raid5f", 00:18:28.419 "superblock": true, 00:18:28.419 "num_base_bdevs": 4, 00:18:28.419 "num_base_bdevs_discovered": 4, 00:18:28.419 "num_base_bdevs_operational": 4, 00:18:28.419 "base_bdevs_list": [ 00:18:28.419 { 00:18:28.419 "name": "BaseBdev1", 00:18:28.419 "uuid": "8c847456-6603-5513-bb60-c560c42beb66", 00:18:28.419 "is_configured": true, 00:18:28.419 "data_offset": 2048, 00:18:28.419 "data_size": 63488 00:18:28.419 }, 00:18:28.419 { 00:18:28.419 "name": "BaseBdev2", 00:18:28.419 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:28.419 "is_configured": true, 00:18:28.419 "data_offset": 2048, 00:18:28.419 "data_size": 63488 00:18:28.419 }, 00:18:28.419 { 00:18:28.419 "name": "BaseBdev3", 00:18:28.419 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:28.419 "is_configured": true, 00:18:28.419 "data_offset": 2048, 00:18:28.419 "data_size": 63488 00:18:28.419 }, 00:18:28.419 { 00:18:28.419 "name": "BaseBdev4", 00:18:28.419 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:28.419 "is_configured": true, 00:18:28.419 "data_offset": 2048, 00:18:28.419 "data_size": 63488 00:18:28.419 } 00:18:28.419 ] 00:18:28.419 }' 00:18:28.419 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.419 14:35:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.677 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:28.677 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.677 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.677 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.677 [2024-11-20 14:35:29.728910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:29.031 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:29.032 14:35:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.032 14:35:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:29.293 [2024-11-20 14:35:30.124805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:29.293 /dev/nbd0 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.293 1+0 records in 00:18:29.293 1+0 records out 00:18:29.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734129 s, 5.6 MB/s 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:29.293 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:29.858 496+0 records in 00:18:29.858 496+0 records out 00:18:29.858 97517568 bytes (98 MB, 93 MiB) copied, 0.59213 s, 165 MB/s 00:18:29.858 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:29.858 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.858 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:29.858 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:29.858 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:29.858 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.858 14:35:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:30.115 [2024-11-20 14:35:31.081187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.115 [2024-11-20 14:35:31.120972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.115 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.373 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.373 "name": "raid_bdev1", 00:18:30.373 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:30.373 "strip_size_kb": 64, 00:18:30.373 "state": "online", 00:18:30.373 "raid_level": "raid5f", 00:18:30.373 "superblock": true, 00:18:30.373 "num_base_bdevs": 4, 00:18:30.373 "num_base_bdevs_discovered": 3, 00:18:30.373 "num_base_bdevs_operational": 3, 00:18:30.373 "base_bdevs_list": [ 00:18:30.373 { 00:18:30.373 "name": null, 00:18:30.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.373 "is_configured": false, 00:18:30.373 "data_offset": 0, 00:18:30.373 "data_size": 63488 00:18:30.373 }, 00:18:30.373 { 00:18:30.373 "name": "BaseBdev2", 00:18:30.373 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:30.373 "is_configured": true, 00:18:30.373 "data_offset": 2048, 00:18:30.373 "data_size": 63488 00:18:30.373 }, 00:18:30.373 { 00:18:30.373 "name": "BaseBdev3", 00:18:30.373 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:30.373 "is_configured": true, 00:18:30.373 "data_offset": 2048, 00:18:30.373 "data_size": 63488 00:18:30.373 }, 00:18:30.373 { 00:18:30.373 "name": "BaseBdev4", 00:18:30.373 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:30.373 "is_configured": true, 00:18:30.373 "data_offset": 2048, 00:18:30.373 "data_size": 63488 00:18:30.373 } 00:18:30.373 ] 00:18:30.373 }' 00:18:30.373 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.373 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.632 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:30.632 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.632 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.632 [2024-11-20 14:35:31.633319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:18:30.632 [2024-11-20 14:35:31.648690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:30.632 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.632 14:35:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:30.632 [2024-11-20 14:35:31.658463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.005 "name": "raid_bdev1", 00:18:32.005 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:32.005 "strip_size_kb": 64, 00:18:32.005 "state": "online", 00:18:32.005 "raid_level": "raid5f", 00:18:32.005 "superblock": true, 00:18:32.005 "num_base_bdevs": 4, 
00:18:32.005 "num_base_bdevs_discovered": 4, 00:18:32.005 "num_base_bdevs_operational": 4, 00:18:32.005 "process": { 00:18:32.005 "type": "rebuild", 00:18:32.005 "target": "spare", 00:18:32.005 "progress": { 00:18:32.005 "blocks": 17280, 00:18:32.005 "percent": 9 00:18:32.005 } 00:18:32.005 }, 00:18:32.005 "base_bdevs_list": [ 00:18:32.005 { 00:18:32.005 "name": "spare", 00:18:32.005 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:32.005 "is_configured": true, 00:18:32.005 "data_offset": 2048, 00:18:32.005 "data_size": 63488 00:18:32.005 }, 00:18:32.005 { 00:18:32.005 "name": "BaseBdev2", 00:18:32.005 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:32.005 "is_configured": true, 00:18:32.005 "data_offset": 2048, 00:18:32.005 "data_size": 63488 00:18:32.005 }, 00:18:32.005 { 00:18:32.005 "name": "BaseBdev3", 00:18:32.005 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:32.005 "is_configured": true, 00:18:32.005 "data_offset": 2048, 00:18:32.005 "data_size": 63488 00:18:32.005 }, 00:18:32.005 { 00:18:32.005 "name": "BaseBdev4", 00:18:32.005 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:32.005 "is_configured": true, 00:18:32.005 "data_offset": 2048, 00:18:32.005 "data_size": 63488 00:18:32.005 } 00:18:32.005 ] 00:18:32.005 }' 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.005 14:35:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.005 [2024-11-20 14:35:32.816119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.005 [2024-11-20 14:35:32.871977] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:32.005 [2024-11-20 14:35:32.872287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.005 [2024-11-20 14:35:32.872327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.005 [2024-11-20 14:35:32.872343] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.005 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.006 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.006 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.006 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.006 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.006 "name": "raid_bdev1", 00:18:32.006 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:32.006 "strip_size_kb": 64, 00:18:32.006 "state": "online", 00:18:32.006 "raid_level": "raid5f", 00:18:32.006 "superblock": true, 00:18:32.006 "num_base_bdevs": 4, 00:18:32.006 "num_base_bdevs_discovered": 3, 00:18:32.006 "num_base_bdevs_operational": 3, 00:18:32.006 "base_bdevs_list": [ 00:18:32.006 { 00:18:32.006 "name": null, 00:18:32.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.006 "is_configured": false, 00:18:32.006 "data_offset": 0, 00:18:32.006 "data_size": 63488 00:18:32.006 }, 00:18:32.006 { 00:18:32.006 "name": "BaseBdev2", 00:18:32.006 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:32.006 "is_configured": true, 00:18:32.006 "data_offset": 2048, 00:18:32.006 "data_size": 63488 00:18:32.006 }, 00:18:32.006 { 00:18:32.006 "name": "BaseBdev3", 00:18:32.006 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:32.006 "is_configured": true, 00:18:32.006 "data_offset": 2048, 00:18:32.006 "data_size": 63488 00:18:32.006 }, 00:18:32.006 { 00:18:32.006 "name": "BaseBdev4", 00:18:32.006 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:32.006 "is_configured": true, 00:18:32.006 "data_offset": 2048, 00:18:32.006 "data_size": 63488 00:18:32.006 } 00:18:32.006 ] 00:18:32.006 }' 00:18:32.006 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.006 14:35:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.571 "name": "raid_bdev1", 00:18:32.571 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:32.571 "strip_size_kb": 64, 00:18:32.571 "state": "online", 00:18:32.571 "raid_level": "raid5f", 00:18:32.571 "superblock": true, 00:18:32.571 "num_base_bdevs": 4, 00:18:32.571 "num_base_bdevs_discovered": 3, 00:18:32.571 "num_base_bdevs_operational": 3, 00:18:32.571 "base_bdevs_list": [ 00:18:32.571 { 00:18:32.571 "name": null, 00:18:32.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.571 "is_configured": false, 00:18:32.571 "data_offset": 0, 00:18:32.571 "data_size": 63488 00:18:32.571 }, 00:18:32.571 { 
00:18:32.571 "name": "BaseBdev2", 00:18:32.571 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:32.571 "is_configured": true, 00:18:32.571 "data_offset": 2048, 00:18:32.571 "data_size": 63488 00:18:32.571 }, 00:18:32.571 { 00:18:32.571 "name": "BaseBdev3", 00:18:32.571 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:32.571 "is_configured": true, 00:18:32.571 "data_offset": 2048, 00:18:32.571 "data_size": 63488 00:18:32.571 }, 00:18:32.571 { 00:18:32.571 "name": "BaseBdev4", 00:18:32.571 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:32.571 "is_configured": true, 00:18:32.571 "data_offset": 2048, 00:18:32.571 "data_size": 63488 00:18:32.571 } 00:18:32.571 ] 00:18:32.571 }' 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.571 [2024-11-20 14:35:33.574416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.571 [2024-11-20 14:35:33.587949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.571 14:35:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:32.571 [2024-11-20 14:35:33.597060] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.946 "name": "raid_bdev1", 00:18:33.946 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:33.946 "strip_size_kb": 64, 00:18:33.946 "state": "online", 00:18:33.946 "raid_level": "raid5f", 00:18:33.946 "superblock": true, 00:18:33.946 "num_base_bdevs": 4, 00:18:33.946 "num_base_bdevs_discovered": 4, 00:18:33.946 "num_base_bdevs_operational": 4, 00:18:33.946 "process": { 00:18:33.946 "type": "rebuild", 00:18:33.946 "target": "spare", 00:18:33.946 "progress": { 00:18:33.946 "blocks": 17280, 00:18:33.946 "percent": 9 00:18:33.946 } 00:18:33.946 }, 00:18:33.946 "base_bdevs_list": [ 00:18:33.946 { 00:18:33.946 "name": "spare", 00:18:33.946 "uuid": 
"08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 }, 00:18:33.946 { 00:18:33.946 "name": "BaseBdev2", 00:18:33.946 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 }, 00:18:33.946 { 00:18:33.946 "name": "BaseBdev3", 00:18:33.946 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 }, 00:18:33.946 { 00:18:33.946 "name": "BaseBdev4", 00:18:33.946 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 } 00:18:33.946 ] 00:18:33.946 }' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:33.946 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=696 00:18:33.946 
14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.946 "name": "raid_bdev1", 00:18:33.946 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:33.946 "strip_size_kb": 64, 00:18:33.946 "state": "online", 00:18:33.946 "raid_level": "raid5f", 00:18:33.946 "superblock": true, 00:18:33.946 "num_base_bdevs": 4, 00:18:33.946 "num_base_bdevs_discovered": 4, 00:18:33.946 "num_base_bdevs_operational": 4, 00:18:33.946 "process": { 00:18:33.946 "type": "rebuild", 00:18:33.946 "target": "spare", 00:18:33.946 "progress": { 00:18:33.946 "blocks": 21120, 00:18:33.946 "percent": 11 00:18:33.946 } 00:18:33.946 }, 00:18:33.946 "base_bdevs_list": [ 00:18:33.946 { 00:18:33.946 "name": "spare", 00:18:33.946 "uuid": 
"08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 }, 00:18:33.946 { 00:18:33.946 "name": "BaseBdev2", 00:18:33.946 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 }, 00:18:33.946 { 00:18:33.946 "name": "BaseBdev3", 00:18:33.946 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 }, 00:18:33.946 { 00:18:33.946 "name": "BaseBdev4", 00:18:33.946 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:33.946 "is_configured": true, 00:18:33.946 "data_offset": 2048, 00:18:33.946 "data_size": 63488 00:18:33.946 } 00:18:33.946 ] 00:18:33.946 }' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.946 14:35:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.880 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.138 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.138 "name": "raid_bdev1", 00:18:35.138 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:35.138 "strip_size_kb": 64, 00:18:35.138 "state": "online", 00:18:35.138 "raid_level": "raid5f", 00:18:35.138 "superblock": true, 00:18:35.138 "num_base_bdevs": 4, 00:18:35.138 "num_base_bdevs_discovered": 4, 00:18:35.138 "num_base_bdevs_operational": 4, 00:18:35.138 "process": { 00:18:35.138 "type": "rebuild", 00:18:35.138 "target": "spare", 00:18:35.138 "progress": { 00:18:35.138 "blocks": 44160, 00:18:35.138 "percent": 23 00:18:35.138 } 00:18:35.138 }, 00:18:35.138 "base_bdevs_list": [ 00:18:35.138 { 00:18:35.138 "name": "spare", 00:18:35.139 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:35.139 "is_configured": true, 00:18:35.139 "data_offset": 2048, 00:18:35.139 "data_size": 63488 00:18:35.139 }, 00:18:35.139 { 00:18:35.139 "name": "BaseBdev2", 00:18:35.139 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:35.139 "is_configured": true, 00:18:35.139 "data_offset": 2048, 00:18:35.139 "data_size": 63488 00:18:35.139 }, 00:18:35.139 { 00:18:35.139 "name": "BaseBdev3", 00:18:35.139 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:35.139 "is_configured": true, 00:18:35.139 
"data_offset": 2048, 00:18:35.139 "data_size": 63488 00:18:35.139 }, 00:18:35.139 { 00:18:35.139 "name": "BaseBdev4", 00:18:35.139 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:35.139 "is_configured": true, 00:18:35.139 "data_offset": 2048, 00:18:35.139 "data_size": 63488 00:18:35.139 } 00:18:35.139 ] 00:18:35.139 }' 00:18:35.139 14:35:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.139 14:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.139 14:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.139 14:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.139 14:35:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.139 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.139 "name": "raid_bdev1", 00:18:36.139 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:36.139 "strip_size_kb": 64, 00:18:36.139 "state": "online", 00:18:36.139 "raid_level": "raid5f", 00:18:36.139 "superblock": true, 00:18:36.139 "num_base_bdevs": 4, 00:18:36.139 "num_base_bdevs_discovered": 4, 00:18:36.139 "num_base_bdevs_operational": 4, 00:18:36.139 "process": { 00:18:36.139 "type": "rebuild", 00:18:36.139 "target": "spare", 00:18:36.139 "progress": { 00:18:36.139 "blocks": 65280, 00:18:36.139 "percent": 34 00:18:36.139 } 00:18:36.139 }, 00:18:36.139 "base_bdevs_list": [ 00:18:36.139 { 00:18:36.139 "name": "spare", 00:18:36.139 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:36.139 "is_configured": true, 00:18:36.139 "data_offset": 2048, 00:18:36.139 "data_size": 63488 00:18:36.139 }, 00:18:36.139 { 00:18:36.139 "name": "BaseBdev2", 00:18:36.139 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:36.139 "is_configured": true, 00:18:36.139 "data_offset": 2048, 00:18:36.139 "data_size": 63488 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "name": "BaseBdev3", 00:18:36.140 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:36.140 "is_configured": true, 00:18:36.140 "data_offset": 2048, 00:18:36.140 "data_size": 63488 00:18:36.140 }, 00:18:36.140 { 00:18:36.140 "name": "BaseBdev4", 00:18:36.140 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:36.140 "is_configured": true, 00:18:36.140 "data_offset": 2048, 00:18:36.140 "data_size": 63488 00:18:36.140 } 00:18:36.140 ] 00:18:36.140 }' 00:18:36.140 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.140 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:36.140 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.397 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.397 14:35:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.332 "name": "raid_bdev1", 00:18:37.332 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:37.332 "strip_size_kb": 64, 00:18:37.332 "state": "online", 00:18:37.332 "raid_level": "raid5f", 00:18:37.332 "superblock": true, 00:18:37.332 "num_base_bdevs": 4, 00:18:37.332 "num_base_bdevs_discovered": 4, 
00:18:37.332 "num_base_bdevs_operational": 4, 00:18:37.332 "process": { 00:18:37.332 "type": "rebuild", 00:18:37.332 "target": "spare", 00:18:37.332 "progress": { 00:18:37.332 "blocks": 86400, 00:18:37.332 "percent": 45 00:18:37.332 } 00:18:37.332 }, 00:18:37.332 "base_bdevs_list": [ 00:18:37.332 { 00:18:37.332 "name": "spare", 00:18:37.332 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:37.332 "is_configured": true, 00:18:37.332 "data_offset": 2048, 00:18:37.332 "data_size": 63488 00:18:37.332 }, 00:18:37.332 { 00:18:37.332 "name": "BaseBdev2", 00:18:37.332 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:37.332 "is_configured": true, 00:18:37.332 "data_offset": 2048, 00:18:37.332 "data_size": 63488 00:18:37.332 }, 00:18:37.332 { 00:18:37.332 "name": "BaseBdev3", 00:18:37.332 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:37.332 "is_configured": true, 00:18:37.332 "data_offset": 2048, 00:18:37.332 "data_size": 63488 00:18:37.332 }, 00:18:37.332 { 00:18:37.332 "name": "BaseBdev4", 00:18:37.332 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:37.332 "is_configured": true, 00:18:37.332 "data_offset": 2048, 00:18:37.332 "data_size": 63488 00:18:37.332 } 00:18:37.332 ] 00:18:37.332 }' 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.332 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.590 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.590 14:35:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.523 "name": "raid_bdev1", 00:18:38.523 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:38.523 "strip_size_kb": 64, 00:18:38.523 "state": "online", 00:18:38.523 "raid_level": "raid5f", 00:18:38.523 "superblock": true, 00:18:38.523 "num_base_bdevs": 4, 00:18:38.523 "num_base_bdevs_discovered": 4, 00:18:38.523 "num_base_bdevs_operational": 4, 00:18:38.523 "process": { 00:18:38.523 "type": "rebuild", 00:18:38.523 "target": "spare", 00:18:38.523 "progress": { 00:18:38.523 "blocks": 109440, 00:18:38.523 "percent": 57 00:18:38.523 } 00:18:38.523 }, 00:18:38.523 "base_bdevs_list": [ 00:18:38.523 { 00:18:38.523 "name": "spare", 00:18:38.523 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:38.523 "is_configured": true, 00:18:38.523 "data_offset": 2048, 00:18:38.523 "data_size": 63488 00:18:38.523 }, 00:18:38.523 { 00:18:38.523 "name": "BaseBdev2", 
00:18:38.523 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:38.523 "is_configured": true, 00:18:38.523 "data_offset": 2048, 00:18:38.523 "data_size": 63488 00:18:38.523 }, 00:18:38.523 { 00:18:38.523 "name": "BaseBdev3", 00:18:38.523 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:38.523 "is_configured": true, 00:18:38.523 "data_offset": 2048, 00:18:38.523 "data_size": 63488 00:18:38.523 }, 00:18:38.523 { 00:18:38.523 "name": "BaseBdev4", 00:18:38.523 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:38.523 "is_configured": true, 00:18:38.523 "data_offset": 2048, 00:18:38.523 "data_size": 63488 00:18:38.523 } 00:18:38.523 ] 00:18:38.523 }' 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.523 14:35:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.897 "name": "raid_bdev1", 00:18:39.897 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:39.897 "strip_size_kb": 64, 00:18:39.897 "state": "online", 00:18:39.897 "raid_level": "raid5f", 00:18:39.897 "superblock": true, 00:18:39.897 "num_base_bdevs": 4, 00:18:39.897 "num_base_bdevs_discovered": 4, 00:18:39.897 "num_base_bdevs_operational": 4, 00:18:39.897 "process": { 00:18:39.897 "type": "rebuild", 00:18:39.897 "target": "spare", 00:18:39.897 "progress": { 00:18:39.897 "blocks": 132480, 00:18:39.897 "percent": 69 00:18:39.897 } 00:18:39.897 }, 00:18:39.897 "base_bdevs_list": [ 00:18:39.897 { 00:18:39.897 "name": "spare", 00:18:39.897 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev2", 00:18:39.897 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev3", 00:18:39.897 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:39.897 "is_configured": true, 00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 }, 00:18:39.897 { 00:18:39.897 "name": "BaseBdev4", 00:18:39.897 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:39.897 "is_configured": true, 
00:18:39.897 "data_offset": 2048, 00:18:39.897 "data_size": 63488 00:18:39.897 } 00:18:39.897 ] 00:18:39.897 }' 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.897 14:35:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.830 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:40.830 "name": "raid_bdev1", 00:18:40.830 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:40.830 "strip_size_kb": 64, 00:18:40.830 "state": "online", 00:18:40.830 "raid_level": "raid5f", 00:18:40.830 "superblock": true, 00:18:40.830 "num_base_bdevs": 4, 00:18:40.830 "num_base_bdevs_discovered": 4, 00:18:40.830 "num_base_bdevs_operational": 4, 00:18:40.830 "process": { 00:18:40.830 "type": "rebuild", 00:18:40.830 "target": "spare", 00:18:40.830 "progress": { 00:18:40.830 "blocks": 153600, 00:18:40.830 "percent": 80 00:18:40.830 } 00:18:40.830 }, 00:18:40.830 "base_bdevs_list": [ 00:18:40.830 { 00:18:40.830 "name": "spare", 00:18:40.830 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:40.830 "is_configured": true, 00:18:40.830 "data_offset": 2048, 00:18:40.830 "data_size": 63488 00:18:40.830 }, 00:18:40.830 { 00:18:40.830 "name": "BaseBdev2", 00:18:40.830 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:40.830 "is_configured": true, 00:18:40.830 "data_offset": 2048, 00:18:40.830 "data_size": 63488 00:18:40.830 }, 00:18:40.830 { 00:18:40.830 "name": "BaseBdev3", 00:18:40.830 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:40.830 "is_configured": true, 00:18:40.830 "data_offset": 2048, 00:18:40.830 "data_size": 63488 00:18:40.830 }, 00:18:40.830 { 00:18:40.830 "name": "BaseBdev4", 00:18:40.830 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:40.830 "is_configured": true, 00:18:40.830 "data_offset": 2048, 00:18:40.830 "data_size": 63488 00:18:40.830 } 00:18:40.830 ] 00:18:40.830 }' 00:18:40.831 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.831 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.831 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.831 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:18:40.831 14:35:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.203 "name": "raid_bdev1", 00:18:42.203 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:42.203 "strip_size_kb": 64, 00:18:42.203 "state": "online", 00:18:42.203 "raid_level": "raid5f", 00:18:42.203 "superblock": true, 00:18:42.203 "num_base_bdevs": 4, 00:18:42.203 "num_base_bdevs_discovered": 4, 00:18:42.203 "num_base_bdevs_operational": 4, 00:18:42.203 "process": { 00:18:42.203 "type": "rebuild", 00:18:42.203 "target": "spare", 00:18:42.203 "progress": { 00:18:42.203 "blocks": 176640, 00:18:42.203 "percent": 92 00:18:42.203 
} 00:18:42.203 }, 00:18:42.203 "base_bdevs_list": [ 00:18:42.203 { 00:18:42.203 "name": "spare", 00:18:42.203 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:42.203 "is_configured": true, 00:18:42.203 "data_offset": 2048, 00:18:42.203 "data_size": 63488 00:18:42.203 }, 00:18:42.203 { 00:18:42.203 "name": "BaseBdev2", 00:18:42.203 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:42.203 "is_configured": true, 00:18:42.203 "data_offset": 2048, 00:18:42.203 "data_size": 63488 00:18:42.203 }, 00:18:42.203 { 00:18:42.203 "name": "BaseBdev3", 00:18:42.203 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:42.203 "is_configured": true, 00:18:42.203 "data_offset": 2048, 00:18:42.203 "data_size": 63488 00:18:42.203 }, 00:18:42.203 { 00:18:42.203 "name": "BaseBdev4", 00:18:42.203 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:42.203 "is_configured": true, 00:18:42.203 "data_offset": 2048, 00:18:42.203 "data_size": 63488 00:18:42.203 } 00:18:42.203 ] 00:18:42.203 }' 00:18:42.203 14:35:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.203 14:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.203 14:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.203 14:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.203 14:35:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.769 [2024-11-20 14:35:43.696585] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:42.769 [2024-11-20 14:35:43.696741] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:42.769 [2024-11-20 14:35:43.696929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.026 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.285 "name": "raid_bdev1", 00:18:43.285 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:43.285 "strip_size_kb": 64, 00:18:43.285 "state": "online", 00:18:43.285 "raid_level": "raid5f", 00:18:43.285 "superblock": true, 00:18:43.285 "num_base_bdevs": 4, 00:18:43.285 "num_base_bdevs_discovered": 4, 00:18:43.285 "num_base_bdevs_operational": 4, 00:18:43.285 "base_bdevs_list": [ 00:18:43.285 { 00:18:43.285 "name": "spare", 00:18:43.285 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 }, 00:18:43.285 { 00:18:43.285 "name": "BaseBdev2", 00:18:43.285 "uuid": 
"05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 }, 00:18:43.285 { 00:18:43.285 "name": "BaseBdev3", 00:18:43.285 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 }, 00:18:43.285 { 00:18:43.285 "name": "BaseBdev4", 00:18:43.285 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 } 00:18:43.285 ] 00:18:43.285 }' 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.285 "name": "raid_bdev1", 00:18:43.285 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:43.285 "strip_size_kb": 64, 00:18:43.285 "state": "online", 00:18:43.285 "raid_level": "raid5f", 00:18:43.285 "superblock": true, 00:18:43.285 "num_base_bdevs": 4, 00:18:43.285 "num_base_bdevs_discovered": 4, 00:18:43.285 "num_base_bdevs_operational": 4, 00:18:43.285 "base_bdevs_list": [ 00:18:43.285 { 00:18:43.285 "name": "spare", 00:18:43.285 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 }, 00:18:43.285 { 00:18:43.285 "name": "BaseBdev2", 00:18:43.285 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 }, 00:18:43.285 { 00:18:43.285 "name": "BaseBdev3", 00:18:43.285 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 }, 00:18:43.285 { 00:18:43.285 "name": "BaseBdev4", 00:18:43.285 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:43.285 "is_configured": true, 00:18:43.285 "data_offset": 2048, 00:18:43.285 "data_size": 63488 00:18:43.285 } 00:18:43.285 ] 00:18:43.285 }' 00:18:43.285 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:43.544 "name": "raid_bdev1", 00:18:43.544 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:43.544 "strip_size_kb": 64, 00:18:43.544 "state": "online", 00:18:43.544 "raid_level": "raid5f", 00:18:43.544 "superblock": true, 00:18:43.544 "num_base_bdevs": 4, 00:18:43.544 "num_base_bdevs_discovered": 4, 00:18:43.544 "num_base_bdevs_operational": 4, 00:18:43.544 "base_bdevs_list": [ 00:18:43.544 { 00:18:43.544 "name": "spare", 00:18:43.544 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:43.544 "is_configured": true, 00:18:43.544 "data_offset": 2048, 00:18:43.544 "data_size": 63488 00:18:43.544 }, 00:18:43.544 { 00:18:43.544 "name": "BaseBdev2", 00:18:43.544 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:43.544 "is_configured": true, 00:18:43.544 "data_offset": 2048, 00:18:43.544 "data_size": 63488 00:18:43.544 }, 00:18:43.544 { 00:18:43.544 "name": "BaseBdev3", 00:18:43.544 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:43.544 "is_configured": true, 00:18:43.544 "data_offset": 2048, 00:18:43.544 "data_size": 63488 00:18:43.544 }, 00:18:43.544 { 00:18:43.544 "name": "BaseBdev4", 00:18:43.544 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:43.544 "is_configured": true, 00:18:43.544 "data_offset": 2048, 00:18:43.544 "data_size": 63488 00:18:43.544 } 00:18:43.544 ] 00:18:43.544 }' 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.544 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.110 [2024-11-20 14:35:44.913440] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:18:44.110 [2024-11-20 14:35:44.913482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.110 [2024-11-20 14:35:44.913671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.110 [2024-11-20 14:35:44.913806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.110 [2024-11-20 14:35:44.913838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.110 14:35:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:44.368 /dev/nbd0 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.368 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.368 1+0 records in 
00:18:44.368 1+0 records out 00:18:44.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181974 s, 22.5 MB/s 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.369 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:44.627 /dev/nbd1 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:44.627 14:35:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.627 1+0 records in 00:18:44.627 1+0 records out 00:18:44.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394675 s, 10.4 MB/s 00:18:44.627 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.884 14:35:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.141 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.707 [2024-11-20 14:35:46.527707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.707 [2024-11-20 14:35:46.527907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.707 [2024-11-20 14:35:46.527957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:45.707 [2024-11-20 14:35:46.527975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.707 [2024-11-20 14:35:46.531275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.707 [2024-11-20 14:35:46.531455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.707 [2024-11-20 14:35:46.531584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:45.707 [2024-11-20 14:35:46.531720] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.707 [2024-11-20 14:35:46.531910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.707 [2024-11-20 14:35:46.532156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.707 [2024-11-20 14:35:46.532279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:45.707 spare 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.707 [2024-11-20 14:35:46.632426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:45.707 [2024-11-20 14:35:46.632473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:45.707 [2024-11-20 14:35:46.632896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:45.707 [2024-11-20 14:35:46.639574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:45.707 [2024-11-20 14:35:46.639600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:45.707 [2024-11-20 14:35:46.639893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.707 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.707 "name": "raid_bdev1", 00:18:45.707 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:45.707 "strip_size_kb": 64, 00:18:45.707 "state": "online", 00:18:45.707 "raid_level": "raid5f", 00:18:45.707 "superblock": true, 00:18:45.707 "num_base_bdevs": 4, 00:18:45.707 "num_base_bdevs_discovered": 4, 00:18:45.707 "num_base_bdevs_operational": 4, 00:18:45.707 "base_bdevs_list": [ 00:18:45.707 { 
00:18:45.707 "name": "spare", 00:18:45.707 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:45.707 "is_configured": true, 00:18:45.707 "data_offset": 2048, 00:18:45.707 "data_size": 63488 00:18:45.707 }, 00:18:45.707 { 00:18:45.707 "name": "BaseBdev2", 00:18:45.707 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:45.707 "is_configured": true, 00:18:45.707 "data_offset": 2048, 00:18:45.707 "data_size": 63488 00:18:45.708 }, 00:18:45.708 { 00:18:45.708 "name": "BaseBdev3", 00:18:45.708 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:45.708 "is_configured": true, 00:18:45.708 "data_offset": 2048, 00:18:45.708 "data_size": 63488 00:18:45.708 }, 00:18:45.708 { 00:18:45.708 "name": "BaseBdev4", 00:18:45.708 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:45.708 "is_configured": true, 00:18:45.708 "data_offset": 2048, 00:18:45.708 "data_size": 63488 00:18:45.708 } 00:18:45.708 ] 00:18:45.708 }' 00:18:45.708 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.708 14:35:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.273 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:46.274 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.274 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.274 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.274 "name": "raid_bdev1", 00:18:46.274 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:46.274 "strip_size_kb": 64, 00:18:46.274 "state": "online", 00:18:46.274 "raid_level": "raid5f", 00:18:46.274 "superblock": true, 00:18:46.274 "num_base_bdevs": 4, 00:18:46.274 "num_base_bdevs_discovered": 4, 00:18:46.274 "num_base_bdevs_operational": 4, 00:18:46.274 "base_bdevs_list": [ 00:18:46.274 { 00:18:46.274 "name": "spare", 00:18:46.274 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:46.274 "is_configured": true, 00:18:46.274 "data_offset": 2048, 00:18:46.274 "data_size": 63488 00:18:46.274 }, 00:18:46.274 { 00:18:46.274 "name": "BaseBdev2", 00:18:46.274 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:46.274 "is_configured": true, 00:18:46.274 "data_offset": 2048, 00:18:46.274 "data_size": 63488 00:18:46.274 }, 00:18:46.274 { 00:18:46.274 "name": "BaseBdev3", 00:18:46.274 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:46.274 "is_configured": true, 00:18:46.274 "data_offset": 2048, 00:18:46.274 "data_size": 63488 00:18:46.274 }, 00:18:46.274 { 00:18:46.274 "name": "BaseBdev4", 00:18:46.274 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:46.274 "is_configured": true, 00:18:46.274 "data_offset": 2048, 00:18:46.274 "data_size": 63488 00:18:46.274 } 00:18:46.274 ] 00:18:46.274 }' 00:18:46.274 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.274 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.274 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.532 [2024-11-20 14:35:47.391708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.532 "name": "raid_bdev1", 00:18:46.532 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:46.532 "strip_size_kb": 64, 00:18:46.532 "state": "online", 00:18:46.532 "raid_level": "raid5f", 00:18:46.532 "superblock": true, 00:18:46.532 "num_base_bdevs": 4, 00:18:46.532 "num_base_bdevs_discovered": 3, 00:18:46.532 "num_base_bdevs_operational": 3, 00:18:46.532 "base_bdevs_list": [ 00:18:46.532 { 00:18:46.532 "name": null, 00:18:46.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.532 "is_configured": false, 00:18:46.532 "data_offset": 0, 00:18:46.532 "data_size": 63488 00:18:46.532 }, 00:18:46.532 { 00:18:46.532 "name": "BaseBdev2", 00:18:46.532 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:46.532 "is_configured": true, 00:18:46.532 "data_offset": 2048, 00:18:46.532 "data_size": 63488 00:18:46.532 }, 00:18:46.532 
{ 00:18:46.532 "name": "BaseBdev3", 00:18:46.532 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:46.532 "is_configured": true, 00:18:46.532 "data_offset": 2048, 00:18:46.532 "data_size": 63488 00:18:46.532 }, 00:18:46.532 { 00:18:46.532 "name": "BaseBdev4", 00:18:46.532 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:46.532 "is_configured": true, 00:18:46.532 "data_offset": 2048, 00:18:46.532 "data_size": 63488 00:18:46.532 } 00:18:46.532 ] 00:18:46.532 }' 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.532 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.098 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.098 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.098 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.098 [2024-11-20 14:35:47.927941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.098 [2024-11-20 14:35:47.928241] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.098 [2024-11-20 14:35:47.928272] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:47.098 [2024-11-20 14:35:47.928321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.098 [2024-11-20 14:35:47.941769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:47.098 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.098 14:35:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:47.098 [2024-11-20 14:35:47.950568] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.034 "name": "raid_bdev1", 00:18:48.034 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:48.034 "strip_size_kb": 64, 00:18:48.034 "state": "online", 00:18:48.034 
"raid_level": "raid5f", 00:18:48.034 "superblock": true, 00:18:48.034 "num_base_bdevs": 4, 00:18:48.034 "num_base_bdevs_discovered": 4, 00:18:48.034 "num_base_bdevs_operational": 4, 00:18:48.034 "process": { 00:18:48.034 "type": "rebuild", 00:18:48.034 "target": "spare", 00:18:48.034 "progress": { 00:18:48.034 "blocks": 17280, 00:18:48.034 "percent": 9 00:18:48.034 } 00:18:48.034 }, 00:18:48.034 "base_bdevs_list": [ 00:18:48.034 { 00:18:48.034 "name": "spare", 00:18:48.034 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:48.034 "is_configured": true, 00:18:48.034 "data_offset": 2048, 00:18:48.034 "data_size": 63488 00:18:48.034 }, 00:18:48.034 { 00:18:48.034 "name": "BaseBdev2", 00:18:48.034 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:48.034 "is_configured": true, 00:18:48.034 "data_offset": 2048, 00:18:48.034 "data_size": 63488 00:18:48.034 }, 00:18:48.034 { 00:18:48.034 "name": "BaseBdev3", 00:18:48.034 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:48.034 "is_configured": true, 00:18:48.034 "data_offset": 2048, 00:18:48.034 "data_size": 63488 00:18:48.034 }, 00:18:48.034 { 00:18:48.034 "name": "BaseBdev4", 00:18:48.034 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:48.034 "is_configured": true, 00:18:48.034 "data_offset": 2048, 00:18:48.034 "data_size": 63488 00:18:48.034 } 00:18:48.034 ] 00:18:48.034 }' 00:18:48.034 14:35:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.034 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.034 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.295 [2024-11-20 14:35:49.104236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.295 [2024-11-20 14:35:49.161185] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:48.295 [2024-11-20 14:35:49.161288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.295 [2024-11-20 14:35:49.161314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.295 [2024-11-20 14:35:49.161331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.295 "name": "raid_bdev1", 00:18:48.295 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:48.295 "strip_size_kb": 64, 00:18:48.295 "state": "online", 00:18:48.295 "raid_level": "raid5f", 00:18:48.295 "superblock": true, 00:18:48.295 "num_base_bdevs": 4, 00:18:48.295 "num_base_bdevs_discovered": 3, 00:18:48.295 "num_base_bdevs_operational": 3, 00:18:48.295 "base_bdevs_list": [ 00:18:48.295 { 00:18:48.295 "name": null, 00:18:48.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.295 "is_configured": false, 00:18:48.295 "data_offset": 0, 00:18:48.295 "data_size": 63488 00:18:48.295 }, 00:18:48.295 { 00:18:48.295 "name": "BaseBdev2", 00:18:48.295 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:48.295 "is_configured": true, 00:18:48.295 "data_offset": 2048, 00:18:48.295 "data_size": 63488 00:18:48.295 }, 00:18:48.295 { 00:18:48.295 "name": "BaseBdev3", 00:18:48.295 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:48.295 "is_configured": true, 00:18:48.295 "data_offset": 2048, 00:18:48.295 "data_size": 63488 00:18:48.295 }, 00:18:48.295 { 00:18:48.295 "name": "BaseBdev4", 00:18:48.295 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:48.295 "is_configured": true, 00:18:48.295 "data_offset": 2048, 00:18:48.295 "data_size": 63488 00:18:48.295 } 00:18:48.295 ] 00:18:48.295 }' 
00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.295 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.885 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:48.885 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.885 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.885 [2024-11-20 14:35:49.688833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:48.885 [2024-11-20 14:35:49.689056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.885 [2024-11-20 14:35:49.689222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:48.885 [2024-11-20 14:35:49.689256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.885 [2024-11-20 14:35:49.689971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.885 [2024-11-20 14:35:49.690018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:48.885 [2024-11-20 14:35:49.690220] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:48.885 [2024-11-20 14:35:49.690248] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.885 [2024-11-20 14:35:49.690262] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:48.885 [2024-11-20 14:35:49.690304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.885 [2024-11-20 14:35:49.704723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:48.885 spare 00:18:48.885 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.885 14:35:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:48.885 [2024-11-20 14:35:49.714206] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.820 "name": "raid_bdev1", 00:18:49.820 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:49.820 "strip_size_kb": 64, 00:18:49.820 "state": 
"online", 00:18:49.820 "raid_level": "raid5f", 00:18:49.820 "superblock": true, 00:18:49.820 "num_base_bdevs": 4, 00:18:49.820 "num_base_bdevs_discovered": 4, 00:18:49.820 "num_base_bdevs_operational": 4, 00:18:49.820 "process": { 00:18:49.820 "type": "rebuild", 00:18:49.820 "target": "spare", 00:18:49.820 "progress": { 00:18:49.820 "blocks": 17280, 00:18:49.820 "percent": 9 00:18:49.820 } 00:18:49.820 }, 00:18:49.820 "base_bdevs_list": [ 00:18:49.820 { 00:18:49.820 "name": "spare", 00:18:49.820 "uuid": "08a1cecd-6680-5b55-b806-8e9472d905a2", 00:18:49.820 "is_configured": true, 00:18:49.820 "data_offset": 2048, 00:18:49.820 "data_size": 63488 00:18:49.820 }, 00:18:49.820 { 00:18:49.820 "name": "BaseBdev2", 00:18:49.820 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:49.820 "is_configured": true, 00:18:49.820 "data_offset": 2048, 00:18:49.820 "data_size": 63488 00:18:49.820 }, 00:18:49.820 { 00:18:49.820 "name": "BaseBdev3", 00:18:49.820 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:49.820 "is_configured": true, 00:18:49.820 "data_offset": 2048, 00:18:49.820 "data_size": 63488 00:18:49.820 }, 00:18:49.820 { 00:18:49.820 "name": "BaseBdev4", 00:18:49.820 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:49.820 "is_configured": true, 00:18:49.820 "data_offset": 2048, 00:18:49.820 "data_size": 63488 00:18:49.820 } 00:18:49.820 ] 00:18:49.820 }' 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.820 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.821 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.079 14:35:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 [2024-11-20 14:35:50.884202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.079 [2024-11-20 14:35:50.926103] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.079 [2024-11-20 14:35:50.926361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.079 [2024-11-20 14:35:50.926543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.079 [2024-11-20 14:35:50.926567] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.079 14:35:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.079 14:35:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.079 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.079 "name": "raid_bdev1", 00:18:50.079 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:50.079 "strip_size_kb": 64, 00:18:50.079 "state": "online", 00:18:50.079 "raid_level": "raid5f", 00:18:50.079 "superblock": true, 00:18:50.079 "num_base_bdevs": 4, 00:18:50.079 "num_base_bdevs_discovered": 3, 00:18:50.079 "num_base_bdevs_operational": 3, 00:18:50.079 "base_bdevs_list": [ 00:18:50.079 { 00:18:50.079 "name": null, 00:18:50.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.079 "is_configured": false, 00:18:50.079 "data_offset": 0, 00:18:50.079 "data_size": 63488 00:18:50.079 }, 00:18:50.079 { 00:18:50.079 "name": "BaseBdev2", 00:18:50.079 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:50.079 "is_configured": true, 00:18:50.079 "data_offset": 2048, 00:18:50.079 "data_size": 63488 00:18:50.079 }, 00:18:50.079 { 00:18:50.079 "name": "BaseBdev3", 00:18:50.079 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:50.079 "is_configured": true, 00:18:50.079 "data_offset": 2048, 00:18:50.079 "data_size": 63488 00:18:50.079 }, 00:18:50.079 { 00:18:50.079 "name": "BaseBdev4", 00:18:50.079 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:50.079 "is_configured": true, 00:18:50.079 "data_offset": 2048, 00:18:50.079 
"data_size": 63488 00:18:50.079 } 00:18:50.079 ] 00:18:50.079 }' 00:18:50.079 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.079 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.646 "name": "raid_bdev1", 00:18:50.646 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:50.646 "strip_size_kb": 64, 00:18:50.646 "state": "online", 00:18:50.646 "raid_level": "raid5f", 00:18:50.646 "superblock": true, 00:18:50.646 "num_base_bdevs": 4, 00:18:50.646 "num_base_bdevs_discovered": 3, 00:18:50.646 "num_base_bdevs_operational": 3, 00:18:50.646 "base_bdevs_list": [ 00:18:50.646 { 00:18:50.646 "name": null, 00:18:50.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.646 
"is_configured": false, 00:18:50.646 "data_offset": 0, 00:18:50.646 "data_size": 63488 00:18:50.646 }, 00:18:50.646 { 00:18:50.646 "name": "BaseBdev2", 00:18:50.646 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:50.646 "is_configured": true, 00:18:50.646 "data_offset": 2048, 00:18:50.646 "data_size": 63488 00:18:50.646 }, 00:18:50.646 { 00:18:50.646 "name": "BaseBdev3", 00:18:50.646 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:50.646 "is_configured": true, 00:18:50.646 "data_offset": 2048, 00:18:50.646 "data_size": 63488 00:18:50.646 }, 00:18:50.646 { 00:18:50.646 "name": "BaseBdev4", 00:18:50.646 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:50.646 "is_configured": true, 00:18:50.646 "data_offset": 2048, 00:18:50.646 "data_size": 63488 00:18:50.646 } 00:18:50.646 ] 00:18:50.646 }' 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.646 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:50.647 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.647 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.647 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.647 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:50.647 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.647 14:35:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.647 [2024-11-20 14:35:51.654895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:50.647 [2024-11-20 14:35:51.654966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.647 [2024-11-20 14:35:51.655017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:50.647 [2024-11-20 14:35:51.655033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.647 [2024-11-20 14:35:51.655667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.647 [2024-11-20 14:35:51.655716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:50.647 [2024-11-20 14:35:51.655847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:50.647 [2024-11-20 14:35:51.655869] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.647 [2024-11-20 14:35:51.655886] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:50.647 [2024-11-20 14:35:51.655901] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:50.647 BaseBdev1 00:18:50.647 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.647 14:35:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.022 "name": "raid_bdev1", 00:18:52.022 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:52.022 "strip_size_kb": 64, 00:18:52.022 "state": "online", 00:18:52.022 "raid_level": "raid5f", 00:18:52.022 "superblock": true, 00:18:52.022 "num_base_bdevs": 4, 00:18:52.022 "num_base_bdevs_discovered": 3, 00:18:52.022 "num_base_bdevs_operational": 3, 00:18:52.022 "base_bdevs_list": [ 00:18:52.022 { 00:18:52.022 "name": null, 00:18:52.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.022 "is_configured": false, 00:18:52.022 
"data_offset": 0, 00:18:52.022 "data_size": 63488 00:18:52.022 }, 00:18:52.022 { 00:18:52.022 "name": "BaseBdev2", 00:18:52.022 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:52.022 "is_configured": true, 00:18:52.022 "data_offset": 2048, 00:18:52.022 "data_size": 63488 00:18:52.022 }, 00:18:52.022 { 00:18:52.022 "name": "BaseBdev3", 00:18:52.022 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:52.022 "is_configured": true, 00:18:52.022 "data_offset": 2048, 00:18:52.022 "data_size": 63488 00:18:52.022 }, 00:18:52.022 { 00:18:52.022 "name": "BaseBdev4", 00:18:52.022 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:52.022 "is_configured": true, 00:18:52.022 "data_offset": 2048, 00:18:52.022 "data_size": 63488 00:18:52.022 } 00:18:52.022 ] 00:18:52.022 }' 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.022 14:35:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.280 "name": "raid_bdev1", 00:18:52.280 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:52.280 "strip_size_kb": 64, 00:18:52.280 "state": "online", 00:18:52.280 "raid_level": "raid5f", 00:18:52.280 "superblock": true, 00:18:52.280 "num_base_bdevs": 4, 00:18:52.280 "num_base_bdevs_discovered": 3, 00:18:52.280 "num_base_bdevs_operational": 3, 00:18:52.280 "base_bdevs_list": [ 00:18:52.280 { 00:18:52.280 "name": null, 00:18:52.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.280 "is_configured": false, 00:18:52.280 "data_offset": 0, 00:18:52.280 "data_size": 63488 00:18:52.280 }, 00:18:52.280 { 00:18:52.280 "name": "BaseBdev2", 00:18:52.280 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:52.280 "is_configured": true, 00:18:52.280 "data_offset": 2048, 00:18:52.280 "data_size": 63488 00:18:52.280 }, 00:18:52.280 { 00:18:52.280 "name": "BaseBdev3", 00:18:52.280 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:52.280 "is_configured": true, 00:18:52.280 "data_offset": 2048, 00:18:52.280 "data_size": 63488 00:18:52.280 }, 00:18:52.280 { 00:18:52.280 "name": "BaseBdev4", 00:18:52.280 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:52.280 "is_configured": true, 00:18:52.280 "data_offset": 2048, 00:18:52.280 "data_size": 63488 00:18:52.280 } 00:18:52.280 ] 00:18:52.280 }' 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.280 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.539 
14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.539 [2024-11-20 14:35:53.355619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.539 [2024-11-20 14:35:53.355875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:52.539 [2024-11-20 14:35:53.355901] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:52.539 request: 00:18:52.539 { 00:18:52.539 "base_bdev": "BaseBdev1", 00:18:52.539 "raid_bdev": "raid_bdev1", 00:18:52.539 "method": "bdev_raid_add_base_bdev", 00:18:52.539 "req_id": 1 00:18:52.539 } 00:18:52.539 Got JSON-RPC error response 00:18:52.539 response: 00:18:52.539 { 00:18:52.539 "code": -22, 00:18:52.539 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:18:52.539 } 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.539 14:35:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.475 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.475 "name": "raid_bdev1", 00:18:53.475 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:53.475 "strip_size_kb": 64, 00:18:53.475 "state": "online", 00:18:53.475 "raid_level": "raid5f", 00:18:53.475 "superblock": true, 00:18:53.475 "num_base_bdevs": 4, 00:18:53.475 "num_base_bdevs_discovered": 3, 00:18:53.475 "num_base_bdevs_operational": 3, 00:18:53.475 "base_bdevs_list": [ 00:18:53.475 { 00:18:53.475 "name": null, 00:18:53.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.475 "is_configured": false, 00:18:53.475 "data_offset": 0, 00:18:53.475 "data_size": 63488 00:18:53.475 }, 00:18:53.475 { 00:18:53.475 "name": "BaseBdev2", 00:18:53.475 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:53.475 "is_configured": true, 00:18:53.475 "data_offset": 2048, 00:18:53.476 "data_size": 63488 00:18:53.476 }, 00:18:53.476 { 00:18:53.476 "name": "BaseBdev3", 00:18:53.476 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:53.476 "is_configured": true, 00:18:53.476 "data_offset": 2048, 00:18:53.476 "data_size": 63488 00:18:53.476 }, 00:18:53.476 { 00:18:53.476 "name": "BaseBdev4", 00:18:53.476 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:53.476 "is_configured": true, 00:18:53.476 "data_offset": 2048, 00:18:53.476 "data_size": 63488 00:18:53.476 } 00:18:53.476 ] 00:18:53.476 }' 00:18:53.476 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.476 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.138 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.138 "name": "raid_bdev1", 00:18:54.138 "uuid": "ba98455a-4933-4c43-aff3-92a3ae346f78", 00:18:54.138 "strip_size_kb": 64, 00:18:54.138 "state": "online", 00:18:54.138 "raid_level": "raid5f", 00:18:54.138 "superblock": true, 00:18:54.138 "num_base_bdevs": 4, 00:18:54.138 "num_base_bdevs_discovered": 3, 00:18:54.138 "num_base_bdevs_operational": 3, 00:18:54.138 "base_bdevs_list": [ 00:18:54.138 { 00:18:54.138 "name": null, 00:18:54.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.138 "is_configured": false, 00:18:54.138 "data_offset": 0, 00:18:54.138 "data_size": 63488 00:18:54.138 }, 00:18:54.138 { 00:18:54.138 "name": "BaseBdev2", 00:18:54.138 "uuid": "05f8b932-d8d1-52d5-8240-fd3b1129f249", 00:18:54.138 "is_configured": true, 
00:18:54.138 "data_offset": 2048, 00:18:54.138 "data_size": 63488 00:18:54.138 }, 00:18:54.138 { 00:18:54.138 "name": "BaseBdev3", 00:18:54.138 "uuid": "46dab85b-daa9-56ab-9b67-ec5fe9f75614", 00:18:54.138 "is_configured": true, 00:18:54.138 "data_offset": 2048, 00:18:54.138 "data_size": 63488 00:18:54.138 }, 00:18:54.138 { 00:18:54.138 "name": "BaseBdev4", 00:18:54.138 "uuid": "ee7b9839-5054-5ecd-91e5-1ae9867f49a9", 00:18:54.138 "is_configured": true, 00:18:54.138 "data_offset": 2048, 00:18:54.138 "data_size": 63488 00:18:54.138 } 00:18:54.138 ] 00:18:54.138 }' 00:18:54.139 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.139 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.139 14:35:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85609 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85609 ']' 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85609 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85609 00:18:54.139 killing process with pid 85609 00:18:54.139 Received shutdown signal, test time was about 60.000000 seconds 00:18:54.139 00:18:54.139 Latency(us) 00:18:54.139 [2024-11-20T14:35:55.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.139 [2024-11-20T14:35:55.196Z] 
=================================================================================================================== 00:18:54.139 [2024-11-20T14:35:55.196Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85609' 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85609 00:18:54.139 14:35:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85609 00:18:54.139 [2024-11-20 14:35:55.080397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:54.139 [2024-11-20 14:35:55.080558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.139 [2024-11-20 14:35:55.080687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.139 [2024-11-20 14:35:55.080889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:54.705 [2024-11-20 14:35:55.515562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:55.641 ************************************ 00:18:55.641 END TEST raid5f_rebuild_test_sb 00:18:55.641 ************************************ 00:18:55.641 14:35:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:55.641 00:18:55.641 real 0m28.706s 00:18:55.641 user 0m37.369s 00:18:55.641 sys 0m2.833s 00:18:55.641 14:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.641 14:35:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.641 14:35:56 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:55.641 14:35:56 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:55.641 14:35:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:55.641 14:35:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.641 14:35:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.641 ************************************ 00:18:55.641 START TEST raid_state_function_test_sb_4k 00:18:55.641 ************************************ 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:55.641 14:35:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:55.641 Process raid pid: 86432 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86432 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86432' 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86432 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86432 ']' 00:18:55.641 14:35:56 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.641 14:35:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.899 [2024-11-20 14:35:56.706102] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:18:55.899 [2024-11-20 14:35:56.706481] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.899 [2024-11-20 14:35:56.892145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.157 [2024-11-20 14:35:57.021806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.415 [2024-11-20 14:35:57.230447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.415 [2024-11-20 14:35:57.230813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.673 [2024-11-20 14:35:57.668251] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.673 [2024-11-20 14:35:57.668485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.673 [2024-11-20 14:35:57.668663] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.673 [2024-11-20 14:35:57.668807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.673 
14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.673 "name": "Existed_Raid", 00:18:56.673 "uuid": "1aadd7fe-82f3-4251-9758-9521ad952c4c", 00:18:56.673 "strip_size_kb": 0, 00:18:56.673 "state": "configuring", 00:18:56.673 "raid_level": "raid1", 00:18:56.673 "superblock": true, 00:18:56.673 "num_base_bdevs": 2, 00:18:56.673 "num_base_bdevs_discovered": 0, 00:18:56.673 "num_base_bdevs_operational": 2, 00:18:56.673 "base_bdevs_list": [ 00:18:56.673 { 00:18:56.673 "name": "BaseBdev1", 00:18:56.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.673 "is_configured": false, 00:18:56.673 "data_offset": 0, 00:18:56.673 "data_size": 0 00:18:56.673 }, 00:18:56.673 { 00:18:56.673 "name": "BaseBdev2", 00:18:56.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.673 "is_configured": false, 00:18:56.673 "data_offset": 0, 00:18:56.673 "data_size": 0 00:18:56.673 } 00:18:56.673 ] 00:18:56.673 }' 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.673 14:35:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.240 [2024-11-20 14:35:58.172333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.240 [2024-11-20 14:35:58.172380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.240 [2024-11-20 14:35:58.180305] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.240 [2024-11-20 14:35:58.180506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.240 [2024-11-20 14:35:58.180534] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.240 [2024-11-20 14:35:58.180560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.240 14:35:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.240 [2024-11-20 14:35:58.226235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.240 BaseBdev1 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:57.240 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.241 [ 00:18:57.241 { 00:18:57.241 "name": "BaseBdev1", 00:18:57.241 "aliases": [ 00:18:57.241 
"ede3e0b2-9e12-4943-a4a1-525c87c68fed" 00:18:57.241 ], 00:18:57.241 "product_name": "Malloc disk", 00:18:57.241 "block_size": 4096, 00:18:57.241 "num_blocks": 8192, 00:18:57.241 "uuid": "ede3e0b2-9e12-4943-a4a1-525c87c68fed", 00:18:57.241 "assigned_rate_limits": { 00:18:57.241 "rw_ios_per_sec": 0, 00:18:57.241 "rw_mbytes_per_sec": 0, 00:18:57.241 "r_mbytes_per_sec": 0, 00:18:57.241 "w_mbytes_per_sec": 0 00:18:57.241 }, 00:18:57.241 "claimed": true, 00:18:57.241 "claim_type": "exclusive_write", 00:18:57.241 "zoned": false, 00:18:57.241 "supported_io_types": { 00:18:57.241 "read": true, 00:18:57.241 "write": true, 00:18:57.241 "unmap": true, 00:18:57.241 "flush": true, 00:18:57.241 "reset": true, 00:18:57.241 "nvme_admin": false, 00:18:57.241 "nvme_io": false, 00:18:57.241 "nvme_io_md": false, 00:18:57.241 "write_zeroes": true, 00:18:57.241 "zcopy": true, 00:18:57.241 "get_zone_info": false, 00:18:57.241 "zone_management": false, 00:18:57.241 "zone_append": false, 00:18:57.241 "compare": false, 00:18:57.241 "compare_and_write": false, 00:18:57.241 "abort": true, 00:18:57.241 "seek_hole": false, 00:18:57.241 "seek_data": false, 00:18:57.241 "copy": true, 00:18:57.241 "nvme_iov_md": false 00:18:57.241 }, 00:18:57.241 "memory_domains": [ 00:18:57.241 { 00:18:57.241 "dma_device_id": "system", 00:18:57.241 "dma_device_type": 1 00:18:57.241 }, 00:18:57.241 { 00:18:57.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.241 "dma_device_type": 2 00:18:57.241 } 00:18:57.241 ], 00:18:57.241 "driver_specific": {} 00:18:57.241 } 00:18:57.241 ] 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.241 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.499 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.499 "name": "Existed_Raid", 00:18:57.499 "uuid": "1cc539ce-0b51-4554-b4ab-83f1bfb580ed", 00:18:57.499 "strip_size_kb": 0, 00:18:57.499 "state": "configuring", 00:18:57.499 "raid_level": "raid1", 00:18:57.499 "superblock": true, 00:18:57.499 "num_base_bdevs": 2, 00:18:57.499 
"num_base_bdevs_discovered": 1, 00:18:57.499 "num_base_bdevs_operational": 2, 00:18:57.499 "base_bdevs_list": [ 00:18:57.499 { 00:18:57.499 "name": "BaseBdev1", 00:18:57.499 "uuid": "ede3e0b2-9e12-4943-a4a1-525c87c68fed", 00:18:57.499 "is_configured": true, 00:18:57.499 "data_offset": 256, 00:18:57.499 "data_size": 7936 00:18:57.499 }, 00:18:57.499 { 00:18:57.499 "name": "BaseBdev2", 00:18:57.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.499 "is_configured": false, 00:18:57.499 "data_offset": 0, 00:18:57.499 "data_size": 0 00:18:57.499 } 00:18:57.499 ] 00:18:57.499 }' 00:18:57.499 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.499 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.757 [2024-11-20 14:35:58.774426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.757 [2024-11-20 14:35:58.774691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.757 [2024-11-20 14:35:58.782462] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.757 [2024-11-20 14:35:58.785094] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.757 [2024-11-20 14:35:58.785300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.757 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.016 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.016 "name": "Existed_Raid", 00:18:58.016 "uuid": "7650815e-ad6e-42af-92da-523420df59da", 00:18:58.016 "strip_size_kb": 0, 00:18:58.016 "state": "configuring", 00:18:58.016 "raid_level": "raid1", 00:18:58.016 "superblock": true, 00:18:58.016 "num_base_bdevs": 2, 00:18:58.016 "num_base_bdevs_discovered": 1, 00:18:58.016 "num_base_bdevs_operational": 2, 00:18:58.016 "base_bdevs_list": [ 00:18:58.016 { 00:18:58.016 "name": "BaseBdev1", 00:18:58.016 "uuid": "ede3e0b2-9e12-4943-a4a1-525c87c68fed", 00:18:58.016 "is_configured": true, 00:18:58.016 "data_offset": 256, 00:18:58.016 "data_size": 7936 00:18:58.016 }, 00:18:58.016 { 00:18:58.016 "name": "BaseBdev2", 00:18:58.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.016 "is_configured": false, 00:18:58.016 "data_offset": 0, 00:18:58.016 "data_size": 0 00:18:58.016 } 00:18:58.016 ] 00:18:58.016 }' 00:18:58.016 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.016 14:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.274 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:58.274 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.274 14:35:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.533 [2024-11-20 14:35:59.334605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.533 [2024-11-20 14:35:59.334984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:58.533 [2024-11-20 14:35:59.335004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:58.533 BaseBdev2 00:18:58.533 [2024-11-20 14:35:59.335360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:58.533 [2024-11-20 14:35:59.335580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:58.533 [2024-11-20 14:35:59.335606] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:58.533 [2024-11-20 14:35:59.335814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.533 14:35:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.533 [ 00:18:58.533 { 00:18:58.533 "name": "BaseBdev2", 00:18:58.533 "aliases": [ 00:18:58.533 "edf835b0-6b87-4356-a5d8-854221a13b3a" 00:18:58.533 ], 00:18:58.533 "product_name": "Malloc disk", 00:18:58.533 "block_size": 4096, 00:18:58.533 "num_blocks": 8192, 00:18:58.533 "uuid": "edf835b0-6b87-4356-a5d8-854221a13b3a", 00:18:58.533 "assigned_rate_limits": { 00:18:58.533 "rw_ios_per_sec": 0, 00:18:58.533 "rw_mbytes_per_sec": 0, 00:18:58.533 "r_mbytes_per_sec": 0, 00:18:58.533 "w_mbytes_per_sec": 0 00:18:58.533 }, 00:18:58.533 "claimed": true, 00:18:58.533 "claim_type": "exclusive_write", 00:18:58.533 "zoned": false, 00:18:58.533 "supported_io_types": { 00:18:58.533 "read": true, 00:18:58.533 "write": true, 00:18:58.533 "unmap": true, 00:18:58.533 "flush": true, 00:18:58.533 "reset": true, 00:18:58.533 "nvme_admin": false, 00:18:58.533 "nvme_io": false, 00:18:58.533 "nvme_io_md": false, 00:18:58.533 "write_zeroes": true, 00:18:58.533 "zcopy": true, 00:18:58.533 "get_zone_info": false, 00:18:58.533 "zone_management": false, 00:18:58.533 "zone_append": false, 00:18:58.533 "compare": false, 00:18:58.533 "compare_and_write": false, 00:18:58.533 "abort": true, 00:18:58.533 "seek_hole": false, 00:18:58.533 "seek_data": false, 00:18:58.533 "copy": true, 00:18:58.533 "nvme_iov_md": false 
00:18:58.533 }, 00:18:58.533 "memory_domains": [ 00:18:58.533 { 00:18:58.533 "dma_device_id": "system", 00:18:58.533 "dma_device_type": 1 00:18:58.533 }, 00:18:58.533 { 00:18:58.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.533 "dma_device_type": 2 00:18:58.533 } 00:18:58.533 ], 00:18:58.533 "driver_specific": {} 00:18:58.533 } 00:18:58.533 ] 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:58.533 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.534 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.534 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.534 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.534 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.534 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.534 "name": "Existed_Raid", 00:18:58.534 "uuid": "7650815e-ad6e-42af-92da-523420df59da", 00:18:58.534 "strip_size_kb": 0, 00:18:58.534 "state": "online", 00:18:58.534 "raid_level": "raid1", 00:18:58.534 "superblock": true, 00:18:58.534 "num_base_bdevs": 2, 00:18:58.534 "num_base_bdevs_discovered": 2, 00:18:58.534 "num_base_bdevs_operational": 2, 00:18:58.534 "base_bdevs_list": [ 00:18:58.534 { 00:18:58.534 "name": "BaseBdev1", 00:18:58.534 "uuid": "ede3e0b2-9e12-4943-a4a1-525c87c68fed", 00:18:58.534 "is_configured": true, 00:18:58.534 "data_offset": 256, 00:18:58.534 "data_size": 7936 00:18:58.534 }, 00:18:58.534 { 00:18:58.534 "name": "BaseBdev2", 00:18:58.534 "uuid": "edf835b0-6b87-4356-a5d8-854221a13b3a", 00:18:58.534 "is_configured": true, 00:18:58.534 "data_offset": 256, 00:18:58.534 "data_size": 7936 00:18:58.534 } 00:18:58.534 ] 00:18:58.534 }' 00:18:58.534 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.534 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:59.101 14:35:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.101 [2024-11-20 14:35:59.919203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.101 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:59.101 "name": "Existed_Raid", 00:18:59.101 "aliases": [ 00:18:59.101 "7650815e-ad6e-42af-92da-523420df59da" 00:18:59.101 ], 00:18:59.101 "product_name": "Raid Volume", 00:18:59.101 "block_size": 4096, 00:18:59.101 "num_blocks": 7936, 00:18:59.101 "uuid": "7650815e-ad6e-42af-92da-523420df59da", 00:18:59.101 "assigned_rate_limits": { 00:18:59.101 "rw_ios_per_sec": 0, 00:18:59.101 "rw_mbytes_per_sec": 0, 00:18:59.101 "r_mbytes_per_sec": 0, 00:18:59.101 "w_mbytes_per_sec": 0 00:18:59.101 }, 00:18:59.101 "claimed": false, 00:18:59.101 "zoned": false, 00:18:59.101 "supported_io_types": { 00:18:59.101 "read": true, 
00:18:59.101 "write": true, 00:18:59.101 "unmap": false, 00:18:59.101 "flush": false, 00:18:59.101 "reset": true, 00:18:59.101 "nvme_admin": false, 00:18:59.101 "nvme_io": false, 00:18:59.101 "nvme_io_md": false, 00:18:59.101 "write_zeroes": true, 00:18:59.101 "zcopy": false, 00:18:59.101 "get_zone_info": false, 00:18:59.101 "zone_management": false, 00:18:59.101 "zone_append": false, 00:18:59.101 "compare": false, 00:18:59.101 "compare_and_write": false, 00:18:59.101 "abort": false, 00:18:59.101 "seek_hole": false, 00:18:59.101 "seek_data": false, 00:18:59.101 "copy": false, 00:18:59.101 "nvme_iov_md": false 00:18:59.101 }, 00:18:59.101 "memory_domains": [ 00:18:59.101 { 00:18:59.101 "dma_device_id": "system", 00:18:59.101 "dma_device_type": 1 00:18:59.101 }, 00:18:59.101 { 00:18:59.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.101 "dma_device_type": 2 00:18:59.101 }, 00:18:59.101 { 00:18:59.101 "dma_device_id": "system", 00:18:59.101 "dma_device_type": 1 00:18:59.101 }, 00:18:59.101 { 00:18:59.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.101 "dma_device_type": 2 00:18:59.101 } 00:18:59.101 ], 00:18:59.101 "driver_specific": { 00:18:59.101 "raid": { 00:18:59.101 "uuid": "7650815e-ad6e-42af-92da-523420df59da", 00:18:59.101 "strip_size_kb": 0, 00:18:59.101 "state": "online", 00:18:59.101 "raid_level": "raid1", 00:18:59.101 "superblock": true, 00:18:59.101 "num_base_bdevs": 2, 00:18:59.101 "num_base_bdevs_discovered": 2, 00:18:59.101 "num_base_bdevs_operational": 2, 00:18:59.101 "base_bdevs_list": [ 00:18:59.101 { 00:18:59.101 "name": "BaseBdev1", 00:18:59.101 "uuid": "ede3e0b2-9e12-4943-a4a1-525c87c68fed", 00:18:59.101 "is_configured": true, 00:18:59.101 "data_offset": 256, 00:18:59.101 "data_size": 7936 00:18:59.101 }, 00:18:59.101 { 00:18:59.101 "name": "BaseBdev2", 00:18:59.101 "uuid": "edf835b0-6b87-4356-a5d8-854221a13b3a", 00:18:59.101 "is_configured": true, 00:18:59.102 "data_offset": 256, 00:18:59.102 "data_size": 7936 00:18:59.102 } 
00:18:59.102 ] 00:18:59.102 } 00:18:59.102 } 00:18:59.102 }' 00:18:59.102 14:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:59.102 BaseBdev2' 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:59.102 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.359 [2024-11-20 14:36:00.182963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:59.359 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:59.360 14:36:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.360 "name": "Existed_Raid", 00:18:59.360 "uuid": "7650815e-ad6e-42af-92da-523420df59da", 00:18:59.360 "strip_size_kb": 0, 00:18:59.360 "state": "online", 00:18:59.360 "raid_level": "raid1", 00:18:59.360 "superblock": true, 00:18:59.360 
"num_base_bdevs": 2, 00:18:59.360 "num_base_bdevs_discovered": 1, 00:18:59.360 "num_base_bdevs_operational": 1, 00:18:59.360 "base_bdevs_list": [ 00:18:59.360 { 00:18:59.360 "name": null, 00:18:59.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.360 "is_configured": false, 00:18:59.360 "data_offset": 0, 00:18:59.360 "data_size": 7936 00:18:59.360 }, 00:18:59.360 { 00:18:59.360 "name": "BaseBdev2", 00:18:59.360 "uuid": "edf835b0-6b87-4356-a5d8-854221a13b3a", 00:18:59.360 "is_configured": true, 00:18:59.360 "data_offset": 256, 00:18:59.360 "data_size": 7936 00:18:59.360 } 00:18:59.360 ] 00:18:59.360 }' 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.360 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.927 [2024-11-20 14:36:00.815711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:59.927 [2024-11-20 14:36:00.815855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.927 [2024-11-20 14:36:00.899311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.927 [2024-11-20 14:36:00.899392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.927 [2024-11-20 14:36:00.899415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:59.927 14:36:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86432 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86432 ']' 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86432 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.927 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86432 00:19:00.186 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.186 killing process with pid 86432 00:19:00.186 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.186 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86432' 00:19:00.186 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86432 00:19:00.186 [2024-11-20 14:36:00.984342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:00.186 14:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86432 00:19:00.186 [2024-11-20 14:36:00.999190] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:01.119 14:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:01.119 00:19:01.119 real 0m5.473s 00:19:01.119 user 0m8.263s 00:19:01.119 sys 0m0.779s 00:19:01.119 14:36:02 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.119 14:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.119 ************************************ 00:19:01.119 END TEST raid_state_function_test_sb_4k 00:19:01.119 ************************************ 00:19:01.119 14:36:02 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:01.119 14:36:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:01.119 14:36:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.119 14:36:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.119 ************************************ 00:19:01.119 START TEST raid_superblock_test_4k 00:19:01.119 ************************************ 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:01.119 
14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86686 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86686 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86686 ']' 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.119 14:36:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.377 [2024-11-20 14:36:02.217810] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:19:01.377 [2024-11-20 14:36:02.218021] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86686 ] 00:19:01.377 [2024-11-20 14:36:02.394180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.635 [2024-11-20 14:36:02.524136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.892 [2024-11-20 14:36:02.733001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.892 [2024-11-20 14:36:02.733068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.150 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.409 malloc1 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.409 [2024-11-20 14:36:03.217114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.409 [2024-11-20 14:36:03.217204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.409 [2024-11-20 14:36:03.217240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:02.409 [2024-11-20 14:36:03.217257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.409 [2024-11-20 14:36:03.220197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.409 [2024-11-20 14:36:03.220241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.409 pt1 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:02.409 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.410 malloc2 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.410 [2024-11-20 14:36:03.272822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.410 [2024-11-20 14:36:03.272906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.410 [2024-11-20 14:36:03.272946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:02.410 [2024-11-20 14:36:03.272962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.410 [2024-11-20 14:36:03.275888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.410 [2024-11-20 
14:36:03.276107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.410 pt2 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.410 [2024-11-20 14:36:03.280990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.410 [2024-11-20 14:36:03.283493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.410 [2024-11-20 14:36:03.283934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:02.410 [2024-11-20 14:36:03.283967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.410 [2024-11-20 14:36:03.284313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:02.410 [2024-11-20 14:36:03.284515] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:02.410 [2024-11-20 14:36:03.284545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:02.410 [2024-11-20 14:36:03.284773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.410 "name": "raid_bdev1", 00:19:02.410 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:02.410 "strip_size_kb": 0, 00:19:02.410 "state": "online", 00:19:02.410 "raid_level": "raid1", 00:19:02.410 "superblock": true, 00:19:02.410 "num_base_bdevs": 2, 00:19:02.410 
"num_base_bdevs_discovered": 2, 00:19:02.410 "num_base_bdevs_operational": 2, 00:19:02.410 "base_bdevs_list": [ 00:19:02.410 { 00:19:02.410 "name": "pt1", 00:19:02.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.410 "is_configured": true, 00:19:02.410 "data_offset": 256, 00:19:02.410 "data_size": 7936 00:19:02.410 }, 00:19:02.410 { 00:19:02.410 "name": "pt2", 00:19:02.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.410 "is_configured": true, 00:19:02.410 "data_offset": 256, 00:19:02.410 "data_size": 7936 00:19:02.410 } 00:19:02.410 ] 00:19:02.410 }' 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.410 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.977 [2024-11-20 14:36:03.809482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.977 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.977 "name": "raid_bdev1", 00:19:02.977 "aliases": [ 00:19:02.977 "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d" 00:19:02.977 ], 00:19:02.978 "product_name": "Raid Volume", 00:19:02.978 "block_size": 4096, 00:19:02.978 "num_blocks": 7936, 00:19:02.978 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:02.978 "assigned_rate_limits": { 00:19:02.978 "rw_ios_per_sec": 0, 00:19:02.978 "rw_mbytes_per_sec": 0, 00:19:02.978 "r_mbytes_per_sec": 0, 00:19:02.978 "w_mbytes_per_sec": 0 00:19:02.978 }, 00:19:02.978 "claimed": false, 00:19:02.978 "zoned": false, 00:19:02.978 "supported_io_types": { 00:19:02.978 "read": true, 00:19:02.978 "write": true, 00:19:02.978 "unmap": false, 00:19:02.978 "flush": false, 00:19:02.978 "reset": true, 00:19:02.978 "nvme_admin": false, 00:19:02.978 "nvme_io": false, 00:19:02.978 "nvme_io_md": false, 00:19:02.978 "write_zeroes": true, 00:19:02.978 "zcopy": false, 00:19:02.978 "get_zone_info": false, 00:19:02.978 "zone_management": false, 00:19:02.978 "zone_append": false, 00:19:02.978 "compare": false, 00:19:02.978 "compare_and_write": false, 00:19:02.978 "abort": false, 00:19:02.978 "seek_hole": false, 00:19:02.978 "seek_data": false, 00:19:02.978 "copy": false, 00:19:02.978 "nvme_iov_md": false 00:19:02.978 }, 00:19:02.978 "memory_domains": [ 00:19:02.978 { 00:19:02.978 "dma_device_id": "system", 00:19:02.978 "dma_device_type": 1 00:19:02.978 }, 00:19:02.978 { 00:19:02.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.978 "dma_device_type": 2 00:19:02.978 }, 00:19:02.978 { 00:19:02.978 "dma_device_id": "system", 00:19:02.978 "dma_device_type": 1 00:19:02.978 }, 00:19:02.978 { 00:19:02.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.978 "dma_device_type": 2 00:19:02.978 } 00:19:02.978 ], 
00:19:02.978 "driver_specific": { 00:19:02.978 "raid": { 00:19:02.978 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:02.978 "strip_size_kb": 0, 00:19:02.978 "state": "online", 00:19:02.978 "raid_level": "raid1", 00:19:02.978 "superblock": true, 00:19:02.978 "num_base_bdevs": 2, 00:19:02.978 "num_base_bdevs_discovered": 2, 00:19:02.978 "num_base_bdevs_operational": 2, 00:19:02.978 "base_bdevs_list": [ 00:19:02.978 { 00:19:02.978 "name": "pt1", 00:19:02.978 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.978 "is_configured": true, 00:19:02.978 "data_offset": 256, 00:19:02.978 "data_size": 7936 00:19:02.978 }, 00:19:02.978 { 00:19:02.978 "name": "pt2", 00:19:02.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.978 "is_configured": true, 00:19:02.978 "data_offset": 256, 00:19:02.978 "data_size": 7936 00:19:02.978 } 00:19:02.978 ] 00:19:02.978 } 00:19:02.978 } 00:19:02.978 }' 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:02.978 pt2' 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.978 14:36:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.978 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:02.978 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:02.978 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.978 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:02.978 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.978 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.978 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.237 [2024-11-20 14:36:04.085585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d ']' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.237 [2024-11-20 14:36:04.133200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.237 [2024-11-20 14:36:04.133367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.237 [2024-11-20 14:36:04.133566] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.237 [2024-11-20 14:36:04.133796] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.237 [2024-11-20 14:36:04.133947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.237 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.237 [2024-11-20 14:36:04.281298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:03.237 [2024-11-20 14:36:04.284005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:03.237 [2024-11-20 14:36:04.284136] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:03.237 [2024-11-20 14:36:04.284226] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:03.237 [2024-11-20 14:36:04.284252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.237 [2024-11-20 14:36:04.284267] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:03.237 request: 00:19:03.237 { 00:19:03.237 "name": "raid_bdev1", 00:19:03.237 "raid_level": "raid1", 00:19:03.237 "base_bdevs": [ 00:19:03.237 "malloc1", 00:19:03.237 "malloc2" 00:19:03.237 ], 00:19:03.237 "superblock": false, 00:19:03.237 "method": "bdev_raid_create", 00:19:03.237 "req_id": 1 00:19:03.237 } 00:19:03.238 Got JSON-RPC error response 00:19:03.238 response: 00:19:03.238 { 00:19:03.238 "code": -17, 00:19:03.238 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:03.238 } 00:19:03.238 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:03.238 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:03.238 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.238 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.238 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.497 [2024-11-20 14:36:04.345363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:03.497 [2024-11-20 14:36:04.345448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.497 [2024-11-20 14:36:04.345479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:03.497 [2024-11-20 14:36:04.345511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.497 [2024-11-20 14:36:04.348850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.497 [2024-11-20 14:36:04.348900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:03.497 [2024-11-20 14:36:04.349003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:03.497 [2024-11-20 14:36:04.349121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:03.497 pt1 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.497 "name": "raid_bdev1", 00:19:03.497 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:03.497 "strip_size_kb": 0, 00:19:03.497 "state": "configuring", 00:19:03.497 "raid_level": "raid1", 00:19:03.497 "superblock": true, 00:19:03.497 "num_base_bdevs": 2, 00:19:03.497 "num_base_bdevs_discovered": 1, 00:19:03.497 "num_base_bdevs_operational": 2, 00:19:03.497 "base_bdevs_list": [ 00:19:03.497 { 00:19:03.497 "name": "pt1", 00:19:03.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.497 "is_configured": true, 00:19:03.497 "data_offset": 256, 00:19:03.497 "data_size": 7936 00:19:03.497 }, 00:19:03.497 { 00:19:03.497 "name": null, 00:19:03.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.497 "is_configured": false, 00:19:03.497 "data_offset": 256, 00:19:03.497 "data_size": 7936 00:19:03.497 } 
00:19:03.497 ] 00:19:03.497 }' 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.497 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.117 [2024-11-20 14:36:04.897538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:04.117 [2024-11-20 14:36:04.897681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.117 [2024-11-20 14:36:04.897719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:04.117 [2024-11-20 14:36:04.897749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.117 [2024-11-20 14:36:04.898392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.117 [2024-11-20 14:36:04.898433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:04.117 [2024-11-20 14:36:04.898544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:04.117 [2024-11-20 14:36:04.898588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.117 [2024-11-20 14:36:04.898773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:04.117 [2024-11-20 14:36:04.898796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:04.117 [2024-11-20 14:36:04.899107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:04.117 [2024-11-20 14:36:04.899314] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:04.117 [2024-11-20 14:36:04.899330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:04.117 [2024-11-20 14:36:04.899505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.117 pt2 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.117 "name": "raid_bdev1", 00:19:04.117 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:04.117 "strip_size_kb": 0, 00:19:04.117 "state": "online", 00:19:04.117 "raid_level": "raid1", 00:19:04.117 "superblock": true, 00:19:04.117 "num_base_bdevs": 2, 00:19:04.117 "num_base_bdevs_discovered": 2, 00:19:04.117 "num_base_bdevs_operational": 2, 00:19:04.117 "base_bdevs_list": [ 00:19:04.117 { 00:19:04.117 "name": "pt1", 00:19:04.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.117 "is_configured": true, 00:19:04.117 "data_offset": 256, 00:19:04.117 "data_size": 7936 00:19:04.117 }, 00:19:04.117 { 00:19:04.117 "name": "pt2", 00:19:04.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.117 "is_configured": true, 00:19:04.117 "data_offset": 256, 00:19:04.117 "data_size": 7936 00:19:04.117 } 00:19:04.117 ] 00:19:04.117 }' 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.117 14:36:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.375 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:04.375 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:04.375 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:04.375 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:04.375 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:04.375 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.634 [2024-11-20 14:36:05.438077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:04.634 "name": "raid_bdev1", 00:19:04.634 "aliases": [ 00:19:04.634 "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d" 00:19:04.634 ], 00:19:04.634 "product_name": "Raid Volume", 00:19:04.634 "block_size": 4096, 00:19:04.634 "num_blocks": 7936, 00:19:04.634 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:04.634 "assigned_rate_limits": { 00:19:04.634 "rw_ios_per_sec": 0, 00:19:04.634 "rw_mbytes_per_sec": 0, 00:19:04.634 "r_mbytes_per_sec": 0, 00:19:04.634 "w_mbytes_per_sec": 0 00:19:04.634 }, 00:19:04.634 "claimed": false, 00:19:04.634 "zoned": false, 00:19:04.634 "supported_io_types": { 00:19:04.634 "read": true, 00:19:04.634 "write": true, 00:19:04.634 "unmap": false, 
00:19:04.634 "flush": false, 00:19:04.634 "reset": true, 00:19:04.634 "nvme_admin": false, 00:19:04.634 "nvme_io": false, 00:19:04.634 "nvme_io_md": false, 00:19:04.634 "write_zeroes": true, 00:19:04.634 "zcopy": false, 00:19:04.634 "get_zone_info": false, 00:19:04.634 "zone_management": false, 00:19:04.634 "zone_append": false, 00:19:04.634 "compare": false, 00:19:04.634 "compare_and_write": false, 00:19:04.634 "abort": false, 00:19:04.634 "seek_hole": false, 00:19:04.634 "seek_data": false, 00:19:04.634 "copy": false, 00:19:04.634 "nvme_iov_md": false 00:19:04.634 }, 00:19:04.634 "memory_domains": [ 00:19:04.634 { 00:19:04.634 "dma_device_id": "system", 00:19:04.634 "dma_device_type": 1 00:19:04.634 }, 00:19:04.634 { 00:19:04.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.634 "dma_device_type": 2 00:19:04.634 }, 00:19:04.634 { 00:19:04.634 "dma_device_id": "system", 00:19:04.634 "dma_device_type": 1 00:19:04.634 }, 00:19:04.634 { 00:19:04.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.634 "dma_device_type": 2 00:19:04.634 } 00:19:04.634 ], 00:19:04.634 "driver_specific": { 00:19:04.634 "raid": { 00:19:04.634 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:04.634 "strip_size_kb": 0, 00:19:04.634 "state": "online", 00:19:04.634 "raid_level": "raid1", 00:19:04.634 "superblock": true, 00:19:04.634 "num_base_bdevs": 2, 00:19:04.634 "num_base_bdevs_discovered": 2, 00:19:04.634 "num_base_bdevs_operational": 2, 00:19:04.634 "base_bdevs_list": [ 00:19:04.634 { 00:19:04.634 "name": "pt1", 00:19:04.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:04.634 "is_configured": true, 00:19:04.634 "data_offset": 256, 00:19:04.634 "data_size": 7936 00:19:04.634 }, 00:19:04.634 { 00:19:04.634 "name": "pt2", 00:19:04.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.634 "is_configured": true, 00:19:04.634 "data_offset": 256, 00:19:04.634 "data_size": 7936 00:19:04.634 } 00:19:04.634 ] 00:19:04.634 } 00:19:04.634 } 00:19:04.634 }' 00:19:04.634 
14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:04.634 pt2' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.634 
14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.634 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.893 [2024-11-20 14:36:05.698040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d '!=' fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d ']' 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.893 [2024-11-20 14:36:05.745836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:04.893 
14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.893 "name": "raid_bdev1", 00:19:04.893 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 
00:19:04.893 "strip_size_kb": 0, 00:19:04.893 "state": "online", 00:19:04.893 "raid_level": "raid1", 00:19:04.893 "superblock": true, 00:19:04.893 "num_base_bdevs": 2, 00:19:04.893 "num_base_bdevs_discovered": 1, 00:19:04.893 "num_base_bdevs_operational": 1, 00:19:04.893 "base_bdevs_list": [ 00:19:04.893 { 00:19:04.893 "name": null, 00:19:04.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.893 "is_configured": false, 00:19:04.893 "data_offset": 0, 00:19:04.893 "data_size": 7936 00:19:04.893 }, 00:19:04.893 { 00:19:04.893 "name": "pt2", 00:19:04.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.893 "is_configured": true, 00:19:04.893 "data_offset": 256, 00:19:04.893 "data_size": 7936 00:19:04.893 } 00:19:04.893 ] 00:19:04.893 }' 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.893 14:36:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.461 [2024-11-20 14:36:06.277948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:05.461 [2024-11-20 14:36:06.278019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.461 [2024-11-20 14:36:06.278133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.461 [2024-11-20 14:36:06.278243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.461 [2024-11-20 14:36:06.278264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:05.461 14:36:06 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:05.461 14:36:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.461 [2024-11-20 14:36:06.353920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:05.461 [2024-11-20 14:36:06.354000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.461 [2024-11-20 14:36:06.354071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:05.461 [2024-11-20 14:36:06.354089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.461 [2024-11-20 14:36:06.357101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.461 [2024-11-20 14:36:06.357160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:05.461 [2024-11-20 14:36:06.357254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:05.461 [2024-11-20 14:36:06.357314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:05.461 [2024-11-20 14:36:06.357431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:05.461 [2024-11-20 14:36:06.357452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:05.461 [2024-11-20 14:36:06.357807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:05.461 [2024-11-20 14:36:06.358023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:05.461 [2024-11-20 14:36:06.358071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:19:05.461 pt2 00:19:05.461 [2024-11-20 14:36:06.358328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.461 "name": "raid_bdev1", 00:19:05.461 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:05.461 "strip_size_kb": 0, 00:19:05.461 "state": "online", 00:19:05.461 "raid_level": "raid1", 00:19:05.461 "superblock": true, 00:19:05.461 "num_base_bdevs": 2, 00:19:05.461 "num_base_bdevs_discovered": 1, 00:19:05.461 "num_base_bdevs_operational": 1, 00:19:05.461 "base_bdevs_list": [ 00:19:05.461 { 00:19:05.461 "name": null, 00:19:05.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.461 "is_configured": false, 00:19:05.461 "data_offset": 256, 00:19:05.461 "data_size": 7936 00:19:05.461 }, 00:19:05.461 { 00:19:05.461 "name": "pt2", 00:19:05.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:05.461 "is_configured": true, 00:19:05.461 "data_offset": 256, 00:19:05.461 "data_size": 7936 00:19:05.461 } 00:19:05.461 ] 00:19:05.461 }' 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.461 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.028 [2024-11-20 14:36:06.894408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.028 [2024-11-20 14:36:06.894451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:06.028 [2024-11-20 14:36:06.894587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.028 [2024-11-20 14:36:06.894696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.028 [2024-11-20 14:36:06.894730] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.028 [2024-11-20 14:36:06.958415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:06.028 [2024-11-20 14:36:06.958484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.028 [2024-11-20 14:36:06.958541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:06.028 [2024-11-20 14:36:06.958557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.028 [2024-11-20 14:36:06.961690] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.028 [2024-11-20 14:36:06.961746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:06.028 [2024-11-20 14:36:06.961856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:06.028 [2024-11-20 14:36:06.961916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:06.028 [2024-11-20 14:36:06.962129] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:06.028 [2024-11-20 14:36:06.962147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:06.028 [2024-11-20 14:36:06.962167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:06.028 [2024-11-20 14:36:06.962266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:06.028 [2024-11-20 14:36:06.962376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:06.028 [2024-11-20 14:36:06.962392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:06.028 [2024-11-20 14:36:06.962757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:06.028 [2024-11-20 14:36:06.962951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:06.028 [2024-11-20 14:36:06.962974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:06.028 [2024-11-20 14:36:06.963253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:06.028 pt1 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.028 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.029 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.029 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.029 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.029 14:36:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.029 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.029 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.029 14:36:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.029 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.029 "name": "raid_bdev1", 00:19:06.029 "uuid": "fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d", 00:19:06.029 "strip_size_kb": 0, 00:19:06.029 "state": "online", 00:19:06.029 "raid_level": "raid1", 
00:19:06.029 "superblock": true, 00:19:06.029 "num_base_bdevs": 2, 00:19:06.029 "num_base_bdevs_discovered": 1, 00:19:06.029 "num_base_bdevs_operational": 1, 00:19:06.029 "base_bdevs_list": [ 00:19:06.029 { 00:19:06.029 "name": null, 00:19:06.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.029 "is_configured": false, 00:19:06.029 "data_offset": 256, 00:19:06.029 "data_size": 7936 00:19:06.029 }, 00:19:06.029 { 00:19:06.029 "name": "pt2", 00:19:06.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:06.029 "is_configured": true, 00:19:06.029 "data_offset": 256, 00:19:06.029 "data_size": 7936 00:19:06.029 } 00:19:06.029 ] 00:19:06.029 }' 00:19:06.029 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.029 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.596 
[2024-11-20 14:36:07.550953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d '!=' fba108b6-0332-4a38-a0b0-ef9d5f5a4f2d ']' 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86686 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86686 ']' 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86686 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86686 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86686' 00:19:06.596 killing process with pid 86686 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86686 00:19:06.596 [2024-11-20 14:36:07.626397] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.596 14:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86686 00:19:06.596 [2024-11-20 14:36:07.626502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.596 [2024-11-20 14:36:07.626601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:06.596 [2024-11-20 14:36:07.626642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:06.855 [2024-11-20 14:36:07.787530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.793 14:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:07.793 00:19:07.793 real 0m6.625s 00:19:07.793 user 0m10.579s 00:19:07.793 sys 0m0.971s 00:19:07.793 14:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.793 ************************************ 00:19:07.793 END TEST raid_superblock_test_4k 00:19:07.793 14:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.793 ************************************ 00:19:07.793 14:36:08 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:07.793 14:36:08 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:07.793 14:36:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:07.793 14:36:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.793 14:36:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.793 ************************************ 00:19:07.793 START TEST raid_rebuild_test_sb_4k 00:19:07.793 ************************************ 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:07.793 14:36:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87014 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87014 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87014 ']' 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.793 14:36:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.051 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:08.051 Zero copy mechanism will not be used. 00:19:08.051 [2024-11-20 14:36:08.920615] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:19:08.051 [2024-11-20 14:36:08.920815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87014 ] 00:19:08.310 [2024-11-20 14:36:09.107979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.310 [2024-11-20 14:36:09.225421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.568 [2024-11-20 14:36:09.413089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.568 [2024-11-20 14:36:09.413177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.826 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.826 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:08.826 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:08.826 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:08.826 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.826 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 BaseBdev1_malloc 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 [2024-11-20 14:36:09.920194] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:09.084 [2024-11-20 14:36:09.920288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.084 [2024-11-20 14:36:09.920321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:09.084 [2024-11-20 14:36:09.920340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.084 [2024-11-20 14:36:09.923291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.084 [2024-11-20 14:36:09.923355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:09.084 BaseBdev1 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 BaseBdev2_malloc 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 [2024-11-20 14:36:09.976274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:09.084 [2024-11-20 14:36:09.976363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:09.084 [2024-11-20 14:36:09.976394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:09.084 [2024-11-20 14:36:09.976411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.084 [2024-11-20 14:36:09.979398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.084 [2024-11-20 14:36:09.979444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:09.084 BaseBdev2 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.084 14:36:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 spare_malloc 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 spare_delay 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 
[2024-11-20 14:36:10.058106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:09.084 [2024-11-20 14:36:10.058265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.084 [2024-11-20 14:36:10.058297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:09.084 [2024-11-20 14:36:10.058315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.084 [2024-11-20 14:36:10.061185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.084 [2024-11-20 14:36:10.061248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:09.084 spare 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.084 [2024-11-20 14:36:10.066344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.084 [2024-11-20 14:36:10.068859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.084 [2024-11-20 14:36:10.069114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:09.084 [2024-11-20 14:36:10.069137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:09.084 [2024-11-20 14:36:10.069401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:09.084 [2024-11-20 14:36:10.069604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:09.084 [2024-11-20 
14:36:10.069619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:09.084 [2024-11-20 14:36:10.069875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.084 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.085 "name": "raid_bdev1", 00:19:09.085 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:09.085 "strip_size_kb": 0, 00:19:09.085 "state": "online", 00:19:09.085 "raid_level": "raid1", 00:19:09.085 "superblock": true, 00:19:09.085 "num_base_bdevs": 2, 00:19:09.085 "num_base_bdevs_discovered": 2, 00:19:09.085 "num_base_bdevs_operational": 2, 00:19:09.085 "base_bdevs_list": [ 00:19:09.085 { 00:19:09.085 "name": "BaseBdev1", 00:19:09.085 "uuid": "e0f2d2e1-205e-52f5-bb96-969edd1a8a10", 00:19:09.085 "is_configured": true, 00:19:09.085 "data_offset": 256, 00:19:09.085 "data_size": 7936 00:19:09.085 }, 00:19:09.085 { 00:19:09.085 "name": "BaseBdev2", 00:19:09.085 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:09.085 "is_configured": true, 00:19:09.085 "data_offset": 256, 00:19:09.085 "data_size": 7936 00:19:09.085 } 00:19:09.085 ] 00:19:09.085 }' 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.085 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.651 [2024-11-20 14:36:10.594906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.651 14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.651 
14:36:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:10.218 [2024-11-20 14:36:10.986705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:10.218 /dev/nbd0 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.218 1+0 records in 00:19:10.218 1+0 records out 00:19:10.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361144 s, 11.3 MB/s 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:10.218 14:36:11 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:10.218 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:11.153 7936+0 records in 00:19:11.153 7936+0 records out 00:19:11.153 32505856 bytes (33 MB, 31 MiB) copied, 0.848299 s, 38.3 MB/s 00:19:11.153 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:11.153 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:11.153 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:11.153 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.153 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:11.153 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.153 14:36:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.153 
[2024-11-20 14:36:12.171393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.153 [2024-11-20 14:36:12.183483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.153 14:36:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.153 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.412 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.412 "name": "raid_bdev1", 00:19:11.412 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:11.412 "strip_size_kb": 0, 00:19:11.412 "state": "online", 00:19:11.412 "raid_level": "raid1", 00:19:11.412 "superblock": true, 00:19:11.412 "num_base_bdevs": 2, 00:19:11.412 "num_base_bdevs_discovered": 1, 00:19:11.412 "num_base_bdevs_operational": 1, 00:19:11.412 "base_bdevs_list": [ 00:19:11.412 { 00:19:11.412 "name": null, 00:19:11.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.412 "is_configured": false, 00:19:11.412 "data_offset": 0, 00:19:11.412 "data_size": 7936 00:19:11.412 }, 00:19:11.412 { 00:19:11.412 "name": "BaseBdev2", 00:19:11.412 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:11.412 "is_configured": true, 00:19:11.412 "data_offset": 256, 00:19:11.412 
"data_size": 7936 00:19:11.412 } 00:19:11.412 ] 00:19:11.412 }' 00:19:11.412 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.412 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.670 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.670 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.670 [2024-11-20 14:36:12.679713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.670 [2024-11-20 14:36:12.696132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:11.670 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.670 14:36:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:11.670 [2024-11-20 14:36:12.698955] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.047 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.047 "name": "raid_bdev1", 00:19:13.047 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:13.047 "strip_size_kb": 0, 00:19:13.048 "state": "online", 00:19:13.048 "raid_level": "raid1", 00:19:13.048 "superblock": true, 00:19:13.048 "num_base_bdevs": 2, 00:19:13.048 "num_base_bdevs_discovered": 2, 00:19:13.048 "num_base_bdevs_operational": 2, 00:19:13.048 "process": { 00:19:13.048 "type": "rebuild", 00:19:13.048 "target": "spare", 00:19:13.048 "progress": { 00:19:13.048 "blocks": 2560, 00:19:13.048 "percent": 32 00:19:13.048 } 00:19:13.048 }, 00:19:13.048 "base_bdevs_list": [ 00:19:13.048 { 00:19:13.048 "name": "spare", 00:19:13.048 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:13.048 "is_configured": true, 00:19:13.048 "data_offset": 256, 00:19:13.048 "data_size": 7936 00:19:13.048 }, 00:19:13.048 { 00:19:13.048 "name": "BaseBdev2", 00:19:13.048 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:13.048 "is_configured": true, 00:19:13.048 "data_offset": 256, 00:19:13.048 "data_size": 7936 00:19:13.048 } 00:19:13.048 ] 00:19:13.048 }' 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.048 [2024-11-20 14:36:13.864477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.048 [2024-11-20 14:36:13.907661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:13.048 [2024-11-20 14:36:13.907778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.048 [2024-11-20 14:36:13.907802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:13.048 [2024-11-20 14:36:13.907817] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.048 "name": "raid_bdev1", 00:19:13.048 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:13.048 "strip_size_kb": 0, 00:19:13.048 "state": "online", 00:19:13.048 "raid_level": "raid1", 00:19:13.048 "superblock": true, 00:19:13.048 "num_base_bdevs": 2, 00:19:13.048 "num_base_bdevs_discovered": 1, 00:19:13.048 "num_base_bdevs_operational": 1, 00:19:13.048 "base_bdevs_list": [ 00:19:13.048 { 00:19:13.048 "name": null, 00:19:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.048 "is_configured": false, 00:19:13.048 "data_offset": 0, 00:19:13.048 "data_size": 7936 00:19:13.048 }, 00:19:13.048 { 00:19:13.048 "name": "BaseBdev2", 00:19:13.048 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:13.048 "is_configured": true, 00:19:13.048 "data_offset": 256, 00:19:13.048 "data_size": 7936 00:19:13.048 } 00:19:13.048 ] 00:19:13.048 }' 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.048 14:36:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.614 14:36:14 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.614 "name": "raid_bdev1", 00:19:13.614 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:13.614 "strip_size_kb": 0, 00:19:13.614 "state": "online", 00:19:13.614 "raid_level": "raid1", 00:19:13.614 "superblock": true, 00:19:13.614 "num_base_bdevs": 2, 00:19:13.614 "num_base_bdevs_discovered": 1, 00:19:13.614 "num_base_bdevs_operational": 1, 00:19:13.614 "base_bdevs_list": [ 00:19:13.614 { 00:19:13.614 "name": null, 00:19:13.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.614 "is_configured": false, 00:19:13.614 "data_offset": 0, 00:19:13.614 "data_size": 7936 00:19:13.614 }, 00:19:13.614 { 00:19:13.614 "name": "BaseBdev2", 00:19:13.614 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:13.614 "is_configured": true, 00:19:13.614 "data_offset": 
256, 00:19:13.614 "data_size": 7936 00:19:13.614 } 00:19:13.614 ] 00:19:13.614 }' 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.614 [2024-11-20 14:36:14.622119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:13.614 [2024-11-20 14:36:14.637564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.614 14:36:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:13.614 [2024-11-20 14:36:14.640364] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.991 "name": "raid_bdev1", 00:19:14.991 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:14.991 "strip_size_kb": 0, 00:19:14.991 "state": "online", 00:19:14.991 "raid_level": "raid1", 00:19:14.991 "superblock": true, 00:19:14.991 "num_base_bdevs": 2, 00:19:14.991 "num_base_bdevs_discovered": 2, 00:19:14.991 "num_base_bdevs_operational": 2, 00:19:14.991 "process": { 00:19:14.991 "type": "rebuild", 00:19:14.991 "target": "spare", 00:19:14.991 "progress": { 00:19:14.991 "blocks": 2560, 00:19:14.991 "percent": 32 00:19:14.991 } 00:19:14.991 }, 00:19:14.991 "base_bdevs_list": [ 00:19:14.991 { 00:19:14.991 "name": "spare", 00:19:14.991 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:14.991 "is_configured": true, 00:19:14.991 "data_offset": 256, 00:19:14.991 "data_size": 7936 00:19:14.991 }, 00:19:14.991 { 00:19:14.991 "name": "BaseBdev2", 00:19:14.991 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:14.991 "is_configured": true, 00:19:14.991 "data_offset": 256, 00:19:14.991 "data_size": 7936 00:19:14.991 } 00:19:14.991 ] 00:19:14.991 }' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:14.991 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=737 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.991 "name": "raid_bdev1", 00:19:14.991 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:14.991 "strip_size_kb": 0, 00:19:14.991 "state": "online", 00:19:14.991 "raid_level": "raid1", 00:19:14.991 "superblock": true, 00:19:14.991 "num_base_bdevs": 2, 00:19:14.991 "num_base_bdevs_discovered": 2, 00:19:14.991 "num_base_bdevs_operational": 2, 00:19:14.991 "process": { 00:19:14.991 "type": "rebuild", 00:19:14.991 "target": "spare", 00:19:14.991 "progress": { 00:19:14.991 "blocks": 2816, 00:19:14.991 "percent": 35 00:19:14.991 } 00:19:14.991 }, 00:19:14.991 "base_bdevs_list": [ 00:19:14.991 { 00:19:14.991 "name": "spare", 00:19:14.991 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:14.991 "is_configured": true, 00:19:14.991 "data_offset": 256, 00:19:14.991 "data_size": 7936 00:19:14.991 }, 00:19:14.991 { 00:19:14.991 "name": "BaseBdev2", 00:19:14.991 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:14.991 "is_configured": true, 00:19:14.991 "data_offset": 256, 00:19:14.991 "data_size": 7936 00:19:14.991 } 00:19:14.991 ] 00:19:14.991 }' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.991 14:36:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.926 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.927 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:16.186 14:36:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.186 14:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.186 "name": "raid_bdev1", 00:19:16.186 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:16.186 "strip_size_kb": 0, 00:19:16.186 "state": "online", 00:19:16.186 "raid_level": "raid1", 00:19:16.186 "superblock": true, 00:19:16.186 "num_base_bdevs": 2, 00:19:16.186 "num_base_bdevs_discovered": 2, 00:19:16.186 "num_base_bdevs_operational": 2, 00:19:16.186 "process": { 00:19:16.186 "type": "rebuild", 00:19:16.186 "target": "spare", 00:19:16.186 "progress": { 00:19:16.186 "blocks": 5888, 00:19:16.186 "percent": 74 00:19:16.186 } 00:19:16.186 }, 00:19:16.186 "base_bdevs_list": [ 00:19:16.186 { 
00:19:16.186 "name": "spare", 00:19:16.186 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:16.186 "is_configured": true, 00:19:16.187 "data_offset": 256, 00:19:16.187 "data_size": 7936 00:19:16.187 }, 00:19:16.187 { 00:19:16.187 "name": "BaseBdev2", 00:19:16.187 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:16.187 "is_configured": true, 00:19:16.187 "data_offset": 256, 00:19:16.187 "data_size": 7936 00:19:16.187 } 00:19:16.187 ] 00:19:16.187 }' 00:19:16.187 14:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.187 14:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.187 14:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.187 14:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.187 14:36:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.757 [2024-11-20 14:36:17.762886] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:16.757 [2024-11-20 14:36:17.763000] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:16.757 [2024-11-20 14:36:17.763167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.324 "name": "raid_bdev1", 00:19:17.324 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:17.324 "strip_size_kb": 0, 00:19:17.324 "state": "online", 00:19:17.324 "raid_level": "raid1", 00:19:17.324 "superblock": true, 00:19:17.324 "num_base_bdevs": 2, 00:19:17.324 "num_base_bdevs_discovered": 2, 00:19:17.324 "num_base_bdevs_operational": 2, 00:19:17.324 "base_bdevs_list": [ 00:19:17.324 { 00:19:17.324 "name": "spare", 00:19:17.324 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:17.324 "is_configured": true, 00:19:17.324 "data_offset": 256, 00:19:17.324 "data_size": 7936 00:19:17.324 }, 00:19:17.324 { 00:19:17.324 "name": "BaseBdev2", 00:19:17.324 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:17.324 "is_configured": true, 00:19:17.324 "data_offset": 256, 00:19:17.324 "data_size": 7936 00:19:17.324 } 00:19:17.324 ] 00:19:17.324 }' 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.324 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.324 "name": "raid_bdev1", 00:19:17.324 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:17.324 "strip_size_kb": 0, 00:19:17.324 "state": "online", 00:19:17.324 "raid_level": "raid1", 00:19:17.324 "superblock": true, 00:19:17.324 "num_base_bdevs": 2, 00:19:17.324 "num_base_bdevs_discovered": 2, 00:19:17.324 "num_base_bdevs_operational": 2, 00:19:17.324 "base_bdevs_list": [ 00:19:17.324 { 00:19:17.324 "name": "spare", 00:19:17.324 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:17.324 "is_configured": true, 00:19:17.325 
"data_offset": 256, 00:19:17.325 "data_size": 7936 00:19:17.325 }, 00:19:17.325 { 00:19:17.325 "name": "BaseBdev2", 00:19:17.325 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:17.325 "is_configured": true, 00:19:17.325 "data_offset": 256, 00:19:17.325 "data_size": 7936 00:19:17.325 } 00:19:17.325 ] 00:19:17.325 }' 00:19:17.325 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.584 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.584 "name": "raid_bdev1", 00:19:17.584 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:17.584 "strip_size_kb": 0, 00:19:17.584 "state": "online", 00:19:17.585 "raid_level": "raid1", 00:19:17.585 "superblock": true, 00:19:17.585 "num_base_bdevs": 2, 00:19:17.585 "num_base_bdevs_discovered": 2, 00:19:17.585 "num_base_bdevs_operational": 2, 00:19:17.585 "base_bdevs_list": [ 00:19:17.585 { 00:19:17.585 "name": "spare", 00:19:17.585 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:17.585 "is_configured": true, 00:19:17.585 "data_offset": 256, 00:19:17.585 "data_size": 7936 00:19:17.585 }, 00:19:17.585 { 00:19:17.585 "name": "BaseBdev2", 00:19:17.585 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:17.585 "is_configured": true, 00:19:17.585 "data_offset": 256, 00:19:17.585 "data_size": 7936 00:19:17.585 } 00:19:17.585 ] 00:19:17.585 }' 00:19:17.585 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.585 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.152 
[2024-11-20 14:36:18.973531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.152 [2024-11-20 14:36:18.973569] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.152 [2024-11-20 14:36:18.973699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.152 [2024-11-20 14:36:18.973809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:18.152 [2024-11-20 14:36:18.973831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.152 14:36:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:18.152 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:18.410 /dev/nbd0 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:18.410 1+0 records in 00:19:18.410 1+0 records out 00:19:18.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00353449 s, 1.2 MB/s 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:18.410 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:18.669 /dev/nbd1 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:18.669 1+0 records in 00:19:18.669 1+0 records out 00:19:18.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396165 s, 10.3 MB/s 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:18.669 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:18.927 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:18.927 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:18.927 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:18.927 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:18.927 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:18.927 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:18.927 14:36:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:19.185 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:19.443 14:36:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.443 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.443 [2024-11-20 14:36:20.463507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:19.444 [2024-11-20 14:36:20.463601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.444 [2024-11-20 14:36:20.463654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:19.444 [2024-11-20 14:36:20.463701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.444 [2024-11-20 14:36:20.466863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.444 
[2024-11-20 14:36:20.466926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:19.444 [2024-11-20 14:36:20.467126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:19.444 [2024-11-20 14:36:20.467219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:19.444 [2024-11-20 14:36:20.467440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:19.444 spare 00:19:19.444 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.444 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:19.444 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.444 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 [2024-11-20 14:36:20.567584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:19.728 [2024-11-20 14:36:20.567668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:19.728 [2024-11-20 14:36:20.568084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:19.728 [2024-11-20 14:36:20.568375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:19.728 [2024-11-20 14:36:20.568403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:19.728 [2024-11-20 14:36:20.568679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:19.728 14:36:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.728 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.728 "name": "raid_bdev1", 00:19:19.728 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:19.728 "strip_size_kb": 0, 00:19:19.728 "state": "online", 00:19:19.728 "raid_level": "raid1", 00:19:19.728 "superblock": true, 00:19:19.728 "num_base_bdevs": 2, 00:19:19.728 "num_base_bdevs_discovered": 2, 00:19:19.728 "num_base_bdevs_operational": 2, 
00:19:19.728 "base_bdevs_list": [ 00:19:19.729 { 00:19:19.729 "name": "spare", 00:19:19.729 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:19.729 "is_configured": true, 00:19:19.729 "data_offset": 256, 00:19:19.729 "data_size": 7936 00:19:19.729 }, 00:19:19.729 { 00:19:19.729 "name": "BaseBdev2", 00:19:19.729 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:19.729 "is_configured": true, 00:19:19.729 "data_offset": 256, 00:19:19.729 "data_size": 7936 00:19:19.729 } 00:19:19.729 ] 00:19:19.729 }' 00:19:19.729 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.729 14:36:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:20.314 "name": "raid_bdev1", 00:19:20.314 
"uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:20.314 "strip_size_kb": 0, 00:19:20.314 "state": "online", 00:19:20.314 "raid_level": "raid1", 00:19:20.314 "superblock": true, 00:19:20.314 "num_base_bdevs": 2, 00:19:20.314 "num_base_bdevs_discovered": 2, 00:19:20.314 "num_base_bdevs_operational": 2, 00:19:20.314 "base_bdevs_list": [ 00:19:20.314 { 00:19:20.314 "name": "spare", 00:19:20.314 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:20.314 "is_configured": true, 00:19:20.314 "data_offset": 256, 00:19:20.314 "data_size": 7936 00:19:20.314 }, 00:19:20.314 { 00:19:20.314 "name": "BaseBdev2", 00:19:20.314 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:20.314 "is_configured": true, 00:19:20.314 "data_offset": 256, 00:19:20.314 "data_size": 7936 00:19:20.314 } 00:19:20.314 ] 00:19:20.314 }' 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.314 [2024-11-20 14:36:21.296893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.314 
14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.314 "name": "raid_bdev1", 00:19:20.314 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:20.314 "strip_size_kb": 0, 00:19:20.314 "state": "online", 00:19:20.314 "raid_level": "raid1", 00:19:20.314 "superblock": true, 00:19:20.314 "num_base_bdevs": 2, 00:19:20.314 "num_base_bdevs_discovered": 1, 00:19:20.314 "num_base_bdevs_operational": 1, 00:19:20.314 "base_bdevs_list": [ 00:19:20.314 { 00:19:20.314 "name": null, 00:19:20.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.314 "is_configured": false, 00:19:20.314 "data_offset": 0, 00:19:20.314 "data_size": 7936 00:19:20.314 }, 00:19:20.314 { 00:19:20.314 "name": "BaseBdev2", 00:19:20.314 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:20.314 "is_configured": true, 00:19:20.314 "data_offset": 256, 00:19:20.314 "data_size": 7936 00:19:20.314 } 00:19:20.314 ] 00:19:20.314 }' 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.314 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.881 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:20.881 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.881 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.881 [2024-11-20 14:36:21.841162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.881 [2024-11-20 14:36:21.841468] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:19:20.881 [2024-11-20 14:36:21.841511] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:20.881 [2024-11-20 14:36:21.841575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.881 [2024-11-20 14:36:21.857743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:20.881 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.881 14:36:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:20.881 [2024-11-20 14:36:21.860422] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.814 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.073 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.073 
"name": "raid_bdev1", 00:19:22.073 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:22.073 "strip_size_kb": 0, 00:19:22.073 "state": "online", 00:19:22.073 "raid_level": "raid1", 00:19:22.073 "superblock": true, 00:19:22.073 "num_base_bdevs": 2, 00:19:22.073 "num_base_bdevs_discovered": 2, 00:19:22.073 "num_base_bdevs_operational": 2, 00:19:22.073 "process": { 00:19:22.073 "type": "rebuild", 00:19:22.073 "target": "spare", 00:19:22.073 "progress": { 00:19:22.073 "blocks": 2560, 00:19:22.073 "percent": 32 00:19:22.073 } 00:19:22.073 }, 00:19:22.073 "base_bdevs_list": [ 00:19:22.073 { 00:19:22.073 "name": "spare", 00:19:22.073 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:22.073 "is_configured": true, 00:19:22.073 "data_offset": 256, 00:19:22.073 "data_size": 7936 00:19:22.073 }, 00:19:22.073 { 00:19:22.073 "name": "BaseBdev2", 00:19:22.073 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:22.073 "is_configured": true, 00:19:22.073 "data_offset": 256, 00:19:22.073 "data_size": 7936 00:19:22.073 } 00:19:22.073 ] 00:19:22.073 }' 00:19:22.073 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.073 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.073 14:36:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 [2024-11-20 14:36:23.029684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.073 [2024-11-20 
14:36:23.068703] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.073 [2024-11-20 14:36:23.068812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.073 [2024-11-20 14:36:23.068836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.073 [2024-11-20 14:36:23.068850] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.073 14:36:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.330 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.330 "name": "raid_bdev1", 00:19:22.330 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:22.330 "strip_size_kb": 0, 00:19:22.330 "state": "online", 00:19:22.330 "raid_level": "raid1", 00:19:22.330 "superblock": true, 00:19:22.330 "num_base_bdevs": 2, 00:19:22.330 "num_base_bdevs_discovered": 1, 00:19:22.330 "num_base_bdevs_operational": 1, 00:19:22.330 "base_bdevs_list": [ 00:19:22.330 { 00:19:22.330 "name": null, 00:19:22.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.330 "is_configured": false, 00:19:22.330 "data_offset": 0, 00:19:22.330 "data_size": 7936 00:19:22.330 }, 00:19:22.330 { 00:19:22.330 "name": "BaseBdev2", 00:19:22.330 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:22.330 "is_configured": true, 00:19:22.330 "data_offset": 256, 00:19:22.330 "data_size": 7936 00:19:22.330 } 00:19:22.330 ] 00:19:22.330 }' 00:19:22.330 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.330 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.588 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:22.588 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.588 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.588 [2024-11-20 14:36:23.626502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:22.588 [2024-11-20 14:36:23.626690] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.588 [2024-11-20 14:36:23.626728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:22.588 [2024-11-20 14:36:23.626747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.588 [2024-11-20 14:36:23.627424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.588 [2024-11-20 14:36:23.627503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:22.588 [2024-11-20 14:36:23.627719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:22.588 [2024-11-20 14:36:23.627750] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.588 [2024-11-20 14:36:23.627764] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:22.588 [2024-11-20 14:36:23.627814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.846 [2024-11-20 14:36:23.643934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:22.846 spare 00:19:22.846 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.846 14:36:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:22.846 [2024-11-20 14:36:23.646781] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.782 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.782 "name": "raid_bdev1", 00:19:23.782 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:23.782 "strip_size_kb": 0, 00:19:23.782 
"state": "online", 00:19:23.782 "raid_level": "raid1", 00:19:23.782 "superblock": true, 00:19:23.782 "num_base_bdevs": 2, 00:19:23.782 "num_base_bdevs_discovered": 2, 00:19:23.782 "num_base_bdevs_operational": 2, 00:19:23.782 "process": { 00:19:23.782 "type": "rebuild", 00:19:23.783 "target": "spare", 00:19:23.783 "progress": { 00:19:23.783 "blocks": 2560, 00:19:23.783 "percent": 32 00:19:23.783 } 00:19:23.783 }, 00:19:23.783 "base_bdevs_list": [ 00:19:23.783 { 00:19:23.783 "name": "spare", 00:19:23.783 "uuid": "82579206-9d74-5c98-9b9b-5a1eef5c626c", 00:19:23.783 "is_configured": true, 00:19:23.783 "data_offset": 256, 00:19:23.783 "data_size": 7936 00:19:23.783 }, 00:19:23.783 { 00:19:23.783 "name": "BaseBdev2", 00:19:23.783 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:23.783 "is_configured": true, 00:19:23.783 "data_offset": 256, 00:19:23.783 "data_size": 7936 00:19:23.783 } 00:19:23.783 ] 00:19:23.783 }' 00:19:23.783 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.783 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:23.783 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.783 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:23.783 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:23.783 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.783 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.783 [2024-11-20 14:36:24.811756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.041 [2024-11-20 14:36:24.855678] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:24.041 [2024-11-20 14:36:24.855811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.041 [2024-11-20 14:36:24.855840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.041 [2024-11-20 14:36:24.855852] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.041 14:36:24 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.041 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.041 "name": "raid_bdev1", 00:19:24.042 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:24.042 "strip_size_kb": 0, 00:19:24.042 "state": "online", 00:19:24.042 "raid_level": "raid1", 00:19:24.042 "superblock": true, 00:19:24.042 "num_base_bdevs": 2, 00:19:24.042 "num_base_bdevs_discovered": 1, 00:19:24.042 "num_base_bdevs_operational": 1, 00:19:24.042 "base_bdevs_list": [ 00:19:24.042 { 00:19:24.042 "name": null, 00:19:24.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.042 "is_configured": false, 00:19:24.042 "data_offset": 0, 00:19:24.042 "data_size": 7936 00:19:24.042 }, 00:19:24.042 { 00:19:24.042 "name": "BaseBdev2", 00:19:24.042 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:24.042 "is_configured": true, 00:19:24.042 "data_offset": 256, 00:19:24.042 "data_size": 7936 00:19:24.042 } 00:19:24.042 ] 00:19:24.042 }' 00:19:24.042 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.042 14:36:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.608 "name": "raid_bdev1", 00:19:24.608 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:24.608 "strip_size_kb": 0, 00:19:24.608 "state": "online", 00:19:24.608 "raid_level": "raid1", 00:19:24.608 "superblock": true, 00:19:24.608 "num_base_bdevs": 2, 00:19:24.608 "num_base_bdevs_discovered": 1, 00:19:24.608 "num_base_bdevs_operational": 1, 00:19:24.608 "base_bdevs_list": [ 00:19:24.608 { 00:19:24.608 "name": null, 00:19:24.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.608 "is_configured": false, 00:19:24.608 "data_offset": 0, 00:19:24.608 "data_size": 7936 00:19:24.608 }, 00:19:24.608 { 00:19:24.608 "name": "BaseBdev2", 00:19:24.608 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:24.608 "is_configured": true, 00:19:24.608 "data_offset": 256, 00:19:24.608 "data_size": 7936 00:19:24.608 } 00:19:24.608 ] 00:19:24.608 }' 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.608 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:24.609 [2024-11-20 14:36:25.566017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:24.609 [2024-11-20 14:36:25.566554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.609 [2024-11-20 14:36:25.566743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:24.609 [2024-11-20 14:36:25.566784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.609 [2024-11-20 14:36:25.567419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.609 [2024-11-20 14:36:25.567472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:24.609 [2024-11-20 14:36:25.567577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:24.609 [2024-11-20 14:36:25.567600] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:24.609 [2024-11-20 14:36:25.567616] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:24.609 [2024-11-20 14:36:25.567649] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:24.609 BaseBdev1 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.609 14:36:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.546 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.803 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.803 "name": "raid_bdev1", 00:19:25.803 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:25.803 "strip_size_kb": 0, 00:19:25.803 "state": "online", 00:19:25.803 "raid_level": "raid1", 00:19:25.803 "superblock": true, 00:19:25.803 "num_base_bdevs": 2, 00:19:25.803 "num_base_bdevs_discovered": 1, 00:19:25.803 "num_base_bdevs_operational": 1, 00:19:25.803 "base_bdevs_list": [ 00:19:25.804 { 00:19:25.804 "name": null, 00:19:25.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.804 "is_configured": false, 00:19:25.804 "data_offset": 0, 00:19:25.804 "data_size": 7936 00:19:25.804 }, 00:19:25.804 { 00:19:25.804 "name": "BaseBdev2", 00:19:25.804 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:25.804 "is_configured": true, 00:19:25.804 "data_offset": 256, 00:19:25.804 "data_size": 7936 00:19:25.804 } 00:19:25.804 ] 00:19:25.804 }' 00:19:25.804 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.804 14:36:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.062 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.321 "name": "raid_bdev1", 00:19:26.321 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:26.321 "strip_size_kb": 0, 00:19:26.321 "state": "online", 00:19:26.321 "raid_level": "raid1", 00:19:26.321 "superblock": true, 00:19:26.321 "num_base_bdevs": 2, 00:19:26.321 "num_base_bdevs_discovered": 1, 00:19:26.321 "num_base_bdevs_operational": 1, 00:19:26.321 "base_bdevs_list": [ 00:19:26.321 { 00:19:26.321 "name": null, 00:19:26.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.321 "is_configured": false, 00:19:26.321 "data_offset": 0, 00:19:26.321 "data_size": 7936 00:19:26.321 }, 00:19:26.321 { 00:19:26.321 "name": "BaseBdev2", 00:19:26.321 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:26.321 "is_configured": true, 00:19:26.321 "data_offset": 256, 00:19:26.321 "data_size": 7936 00:19:26.321 } 00:19:26.321 ] 00:19:26.321 }' 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:26.321 [2024-11-20 14:36:27.246613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.321 [2024-11-20 14:36:27.246915] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:26.321 [2024-11-20 14:36:27.246942] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:26.321 request: 00:19:26.321 { 00:19:26.321 "base_bdev": "BaseBdev1", 00:19:26.321 "raid_bdev": "raid_bdev1", 00:19:26.321 "method": "bdev_raid_add_base_bdev", 00:19:26.321 "req_id": 1 00:19:26.321 } 00:19:26.321 Got JSON-RPC error response 00:19:26.321 response: 00:19:26.321 { 00:19:26.321 "code": -22, 00:19:26.321 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:26.321 } 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.321 14:36:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.256 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.515 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.515 "name": "raid_bdev1", 00:19:27.515 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:27.515 "strip_size_kb": 0, 00:19:27.515 "state": "online", 00:19:27.515 "raid_level": "raid1", 00:19:27.515 "superblock": true, 00:19:27.515 "num_base_bdevs": 2, 00:19:27.515 "num_base_bdevs_discovered": 1, 00:19:27.515 "num_base_bdevs_operational": 1, 00:19:27.515 "base_bdevs_list": [ 00:19:27.515 { 00:19:27.515 "name": null, 00:19:27.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.515 "is_configured": false, 00:19:27.515 "data_offset": 0, 00:19:27.515 "data_size": 7936 00:19:27.515 }, 00:19:27.515 { 00:19:27.515 "name": "BaseBdev2", 00:19:27.515 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:27.515 "is_configured": true, 00:19:27.515 "data_offset": 256, 00:19:27.515 "data_size": 7936 00:19:27.515 } 00:19:27.515 ] 00:19:27.515 }' 00:19:27.515 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.515 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.774 14:36:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.774 "name": "raid_bdev1", 00:19:27.774 "uuid": "602a0c38-744c-43fc-a0a8-ec827cf0158d", 00:19:27.774 "strip_size_kb": 0, 00:19:27.774 "state": "online", 00:19:27.774 "raid_level": "raid1", 00:19:27.774 "superblock": true, 00:19:27.774 "num_base_bdevs": 2, 00:19:27.774 "num_base_bdevs_discovered": 1, 00:19:27.774 "num_base_bdevs_operational": 1, 00:19:27.774 "base_bdevs_list": [ 00:19:27.774 { 00:19:27.774 "name": null, 00:19:27.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.774 "is_configured": false, 00:19:27.774 "data_offset": 0, 00:19:27.774 "data_size": 7936 00:19:27.774 }, 00:19:27.774 { 00:19:27.774 "name": "BaseBdev2", 00:19:27.774 "uuid": "fbc13d6b-a9ec-5441-b9ce-7bddaddb6e77", 00:19:27.774 "is_configured": true, 00:19:27.774 "data_offset": 256, 00:19:27.774 "data_size": 7936 00:19:27.774 } 00:19:27.774 ] 00:19:27.774 }' 00:19:27.774 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.033 14:36:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87014 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87014 ']' 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87014 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87014 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.033 killing process with pid 87014 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87014' 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87014 00:19:28.033 Received shutdown signal, test time was about 60.000000 seconds 00:19:28.033 00:19:28.033 Latency(us) 00:19:28.033 [2024-11-20T14:36:29.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.033 [2024-11-20T14:36:29.090Z] =================================================================================================================== 00:19:28.033 [2024-11-20T14:36:29.090Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.033 [2024-11-20 14:36:28.951625] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:28.033 14:36:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87014 00:19:28.033 [2024-11-20 14:36:28.951823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.033 [2024-11-20 
14:36:28.951895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.033 [2024-11-20 14:36:28.951922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:28.291 [2024-11-20 14:36:29.214961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:29.231 14:36:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:29.231 00:19:29.231 real 0m21.457s 00:19:29.231 user 0m29.019s 00:19:29.231 sys 0m2.528s 00:19:29.231 14:36:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.231 14:36:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:29.231 ************************************ 00:19:29.231 END TEST raid_rebuild_test_sb_4k 00:19:29.231 ************************************ 00:19:29.489 14:36:30 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:29.489 14:36:30 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:29.489 14:36:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:29.489 14:36:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.489 14:36:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:29.489 ************************************ 00:19:29.489 START TEST raid_state_function_test_sb_md_separate 00:19:29.489 ************************************ 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:29.489 
14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:29.489 14:36:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87717 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:29.489 Process raid pid: 87717 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87717' 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87717 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87717 ']' 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.489 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:29.490 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.490 14:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.490 [2024-11-20 14:36:30.430611] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:19:29.490 [2024-11-20 14:36:30.430805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.748 [2024-11-20 14:36:30.618050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.748 [2024-11-20 14:36:30.756699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.007 [2024-11-20 14:36:30.969843] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.007 [2024-11-20 14:36:30.969897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.572 [2024-11-20 14:36:31.430136] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.572 [2024-11-20 14:36:31.430213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:30.572 [2024-11-20 14:36:31.430232] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.572 [2024-11-20 14:36:31.430249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.572 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.572 "name": "Existed_Raid", 00:19:30.572 "uuid": "65e5eedf-79eb-4d08-8797-14ac229fda79", 00:19:30.572 "strip_size_kb": 0, 00:19:30.572 "state": "configuring", 00:19:30.572 "raid_level": "raid1", 00:19:30.572 "superblock": true, 00:19:30.572 "num_base_bdevs": 2, 00:19:30.572 "num_base_bdevs_discovered": 0, 00:19:30.572 "num_base_bdevs_operational": 2, 00:19:30.572 "base_bdevs_list": [ 00:19:30.572 { 00:19:30.572 "name": "BaseBdev1", 00:19:30.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.572 "is_configured": false, 00:19:30.573 "data_offset": 0, 00:19:30.573 "data_size": 0 00:19:30.573 }, 00:19:30.573 { 00:19:30.573 "name": "BaseBdev2", 00:19:30.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.573 "is_configured": false, 00:19:30.573 "data_offset": 0, 00:19:30.573 "data_size": 0 00:19:30.573 } 00:19:30.573 ] 00:19:30.573 }' 00:19:30.573 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.573 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.140 
[2024-11-20 14:36:31.922191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.140 [2024-11-20 14:36:31.922250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.140 [2024-11-20 14:36:31.930212] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:31.140 [2024-11-20 14:36:31.930261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:31.140 [2024-11-20 14:36:31.930277] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.140 [2024-11-20 14:36:31.930298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.140 [2024-11-20 14:36:31.976989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.140 
BaseBdev1 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.140 14:36:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.140 [ 00:19:31.140 { 00:19:31.140 "name": "BaseBdev1", 00:19:31.140 "aliases": [ 00:19:31.140 "c688019c-d5ad-4f34-b696-5e85ab127ca2" 00:19:31.140 ], 00:19:31.140 "product_name": "Malloc disk", 
00:19:31.140 "block_size": 4096, 00:19:31.140 "num_blocks": 8192, 00:19:31.140 "uuid": "c688019c-d5ad-4f34-b696-5e85ab127ca2", 00:19:31.140 "md_size": 32, 00:19:31.140 "md_interleave": false, 00:19:31.140 "dif_type": 0, 00:19:31.140 "assigned_rate_limits": { 00:19:31.140 "rw_ios_per_sec": 0, 00:19:31.140 "rw_mbytes_per_sec": 0, 00:19:31.140 "r_mbytes_per_sec": 0, 00:19:31.140 "w_mbytes_per_sec": 0 00:19:31.140 }, 00:19:31.140 "claimed": true, 00:19:31.140 "claim_type": "exclusive_write", 00:19:31.140 "zoned": false, 00:19:31.140 "supported_io_types": { 00:19:31.140 "read": true, 00:19:31.140 "write": true, 00:19:31.140 "unmap": true, 00:19:31.140 "flush": true, 00:19:31.141 "reset": true, 00:19:31.141 "nvme_admin": false, 00:19:31.141 "nvme_io": false, 00:19:31.141 "nvme_io_md": false, 00:19:31.141 "write_zeroes": true, 00:19:31.141 "zcopy": true, 00:19:31.141 "get_zone_info": false, 00:19:31.141 "zone_management": false, 00:19:31.141 "zone_append": false, 00:19:31.141 "compare": false, 00:19:31.141 "compare_and_write": false, 00:19:31.141 "abort": true, 00:19:31.141 "seek_hole": false, 00:19:31.141 "seek_data": false, 00:19:31.141 "copy": true, 00:19:31.141 "nvme_iov_md": false 00:19:31.141 }, 00:19:31.141 "memory_domains": [ 00:19:31.141 { 00:19:31.141 "dma_device_id": "system", 00:19:31.141 "dma_device_type": 1 00:19:31.141 }, 00:19:31.141 { 00:19:31.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.141 "dma_device_type": 2 00:19:31.141 } 00:19:31.141 ], 00:19:31.141 "driver_specific": {} 00:19:31.141 } 00:19:31.141 ] 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:31.141 14:36:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.141 "name": "Existed_Raid", 00:19:31.141 "uuid": "d36804aa-e28e-4108-921e-80b914a7ffdd", 
00:19:31.141 "strip_size_kb": 0, 00:19:31.141 "state": "configuring", 00:19:31.141 "raid_level": "raid1", 00:19:31.141 "superblock": true, 00:19:31.141 "num_base_bdevs": 2, 00:19:31.141 "num_base_bdevs_discovered": 1, 00:19:31.141 "num_base_bdevs_operational": 2, 00:19:31.141 "base_bdevs_list": [ 00:19:31.141 { 00:19:31.141 "name": "BaseBdev1", 00:19:31.141 "uuid": "c688019c-d5ad-4f34-b696-5e85ab127ca2", 00:19:31.141 "is_configured": true, 00:19:31.141 "data_offset": 256, 00:19:31.141 "data_size": 7936 00:19:31.141 }, 00:19:31.141 { 00:19:31.141 "name": "BaseBdev2", 00:19:31.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.141 "is_configured": false, 00:19:31.141 "data_offset": 0, 00:19:31.141 "data_size": 0 00:19:31.141 } 00:19:31.141 ] 00:19:31.141 }' 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.141 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.709 [2024-11-20 14:36:32.529304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.709 [2024-11-20 14:36:32.529396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:31.709 14:36:32 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.709 [2024-11-20 14:36:32.537310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:31.709 [2024-11-20 14:36:32.539882] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:31.709 [2024-11-20 14:36:32.539935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.709 "name": "Existed_Raid", 00:19:31.709 "uuid": "2bcd0a26-cdce-4e2c-946b-fcaa8fb4a82f", 00:19:31.709 "strip_size_kb": 0, 00:19:31.709 "state": "configuring", 00:19:31.709 "raid_level": "raid1", 00:19:31.709 "superblock": true, 00:19:31.709 "num_base_bdevs": 2, 00:19:31.709 "num_base_bdevs_discovered": 1, 00:19:31.709 "num_base_bdevs_operational": 2, 00:19:31.709 "base_bdevs_list": [ 00:19:31.709 { 00:19:31.709 "name": "BaseBdev1", 00:19:31.709 "uuid": "c688019c-d5ad-4f34-b696-5e85ab127ca2", 00:19:31.709 "is_configured": true, 00:19:31.709 "data_offset": 256, 00:19:31.709 "data_size": 7936 00:19:31.709 }, 00:19:31.709 { 00:19:31.709 "name": "BaseBdev2", 00:19:31.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.709 "is_configured": false, 00:19:31.709 "data_offset": 0, 00:19:31.709 "data_size": 0 00:19:31.709 } 00:19:31.709 ] 00:19:31.709 }' 00:19:31.709 14:36:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.709 14:36:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.277 [2024-11-20 14:36:33.103040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.277 [2024-11-20 14:36:33.103370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:32.277 [2024-11-20 14:36:33.103410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:32.277 [2024-11-20 14:36:33.103512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:32.277 [2024-11-20 14:36:33.103704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:32.277 [2024-11-20 14:36:33.103736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:32.277 BaseBdev2 00:19:32.277 [2024-11-20 14:36:33.103855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.277 [ 00:19:32.277 { 00:19:32.277 "name": "BaseBdev2", 00:19:32.277 "aliases": [ 00:19:32.277 "c0086b8e-8158-4843-9286-d4ff1dadcf62" 00:19:32.277 ], 00:19:32.277 "product_name": "Malloc disk", 00:19:32.277 "block_size": 4096, 00:19:32.277 "num_blocks": 8192, 00:19:32.277 "uuid": "c0086b8e-8158-4843-9286-d4ff1dadcf62", 00:19:32.277 "md_size": 32, 00:19:32.277 "md_interleave": false, 00:19:32.277 "dif_type": 0, 00:19:32.277 "assigned_rate_limits": { 00:19:32.277 "rw_ios_per_sec": 0, 00:19:32.277 "rw_mbytes_per_sec": 0, 00:19:32.277 "r_mbytes_per_sec": 0, 00:19:32.277 "w_mbytes_per_sec": 0 00:19:32.277 }, 00:19:32.277 "claimed": true, 00:19:32.277 "claim_type": 
"exclusive_write", 00:19:32.277 "zoned": false, 00:19:32.277 "supported_io_types": { 00:19:32.277 "read": true, 00:19:32.277 "write": true, 00:19:32.277 "unmap": true, 00:19:32.277 "flush": true, 00:19:32.277 "reset": true, 00:19:32.277 "nvme_admin": false, 00:19:32.277 "nvme_io": false, 00:19:32.277 "nvme_io_md": false, 00:19:32.277 "write_zeroes": true, 00:19:32.277 "zcopy": true, 00:19:32.277 "get_zone_info": false, 00:19:32.277 "zone_management": false, 00:19:32.277 "zone_append": false, 00:19:32.277 "compare": false, 00:19:32.277 "compare_and_write": false, 00:19:32.277 "abort": true, 00:19:32.277 "seek_hole": false, 00:19:32.277 "seek_data": false, 00:19:32.277 "copy": true, 00:19:32.277 "nvme_iov_md": false 00:19:32.277 }, 00:19:32.277 "memory_domains": [ 00:19:32.277 { 00:19:32.277 "dma_device_id": "system", 00:19:32.277 "dma_device_type": 1 00:19:32.277 }, 00:19:32.277 { 00:19:32.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.277 "dma_device_type": 2 00:19:32.277 } 00:19:32.277 ], 00:19:32.277 "driver_specific": {} 00:19:32.277 } 00:19:32.277 ] 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.277 
14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.277 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.278 "name": "Existed_Raid", 00:19:32.278 "uuid": "2bcd0a26-cdce-4e2c-946b-fcaa8fb4a82f", 00:19:32.278 "strip_size_kb": 0, 00:19:32.278 "state": "online", 00:19:32.278 "raid_level": "raid1", 00:19:32.278 "superblock": true, 00:19:32.278 "num_base_bdevs": 2, 00:19:32.278 "num_base_bdevs_discovered": 2, 00:19:32.278 "num_base_bdevs_operational": 2, 00:19:32.278 
"base_bdevs_list": [ 00:19:32.278 { 00:19:32.278 "name": "BaseBdev1", 00:19:32.278 "uuid": "c688019c-d5ad-4f34-b696-5e85ab127ca2", 00:19:32.278 "is_configured": true, 00:19:32.278 "data_offset": 256, 00:19:32.278 "data_size": 7936 00:19:32.278 }, 00:19:32.278 { 00:19:32.278 "name": "BaseBdev2", 00:19:32.278 "uuid": "c0086b8e-8158-4843-9286-d4ff1dadcf62", 00:19:32.278 "is_configured": true, 00:19:32.278 "data_offset": 256, 00:19:32.278 "data_size": 7936 00:19:32.278 } 00:19:32.278 ] 00:19:32.278 }' 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.278 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:19:32.843 [2024-11-20 14:36:33.659676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.843 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:32.843 "name": "Existed_Raid", 00:19:32.843 "aliases": [ 00:19:32.843 "2bcd0a26-cdce-4e2c-946b-fcaa8fb4a82f" 00:19:32.843 ], 00:19:32.843 "product_name": "Raid Volume", 00:19:32.843 "block_size": 4096, 00:19:32.843 "num_blocks": 7936, 00:19:32.843 "uuid": "2bcd0a26-cdce-4e2c-946b-fcaa8fb4a82f", 00:19:32.843 "md_size": 32, 00:19:32.843 "md_interleave": false, 00:19:32.843 "dif_type": 0, 00:19:32.843 "assigned_rate_limits": { 00:19:32.843 "rw_ios_per_sec": 0, 00:19:32.844 "rw_mbytes_per_sec": 0, 00:19:32.844 "r_mbytes_per_sec": 0, 00:19:32.844 "w_mbytes_per_sec": 0 00:19:32.844 }, 00:19:32.844 "claimed": false, 00:19:32.844 "zoned": false, 00:19:32.844 "supported_io_types": { 00:19:32.844 "read": true, 00:19:32.844 "write": true, 00:19:32.844 "unmap": false, 00:19:32.844 "flush": false, 00:19:32.844 "reset": true, 00:19:32.844 "nvme_admin": false, 00:19:32.844 "nvme_io": false, 00:19:32.844 "nvme_io_md": false, 00:19:32.844 "write_zeroes": true, 00:19:32.844 "zcopy": false, 00:19:32.844 "get_zone_info": false, 00:19:32.844 "zone_management": false, 00:19:32.844 "zone_append": false, 00:19:32.844 "compare": false, 00:19:32.844 "compare_and_write": false, 00:19:32.844 "abort": false, 00:19:32.844 "seek_hole": false, 00:19:32.844 "seek_data": false, 00:19:32.844 "copy": false, 00:19:32.844 "nvme_iov_md": false 00:19:32.844 }, 00:19:32.844 "memory_domains": [ 00:19:32.844 { 00:19:32.844 "dma_device_id": "system", 00:19:32.844 "dma_device_type": 1 00:19:32.844 }, 00:19:32.844 { 00:19:32.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.844 "dma_device_type": 2 00:19:32.844 }, 00:19:32.844 { 
00:19:32.844 "dma_device_id": "system", 00:19:32.844 "dma_device_type": 1 00:19:32.844 }, 00:19:32.844 { 00:19:32.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.844 "dma_device_type": 2 00:19:32.844 } 00:19:32.844 ], 00:19:32.844 "driver_specific": { 00:19:32.844 "raid": { 00:19:32.844 "uuid": "2bcd0a26-cdce-4e2c-946b-fcaa8fb4a82f", 00:19:32.844 "strip_size_kb": 0, 00:19:32.844 "state": "online", 00:19:32.844 "raid_level": "raid1", 00:19:32.844 "superblock": true, 00:19:32.844 "num_base_bdevs": 2, 00:19:32.844 "num_base_bdevs_discovered": 2, 00:19:32.844 "num_base_bdevs_operational": 2, 00:19:32.844 "base_bdevs_list": [ 00:19:32.844 { 00:19:32.844 "name": "BaseBdev1", 00:19:32.844 "uuid": "c688019c-d5ad-4f34-b696-5e85ab127ca2", 00:19:32.844 "is_configured": true, 00:19:32.844 "data_offset": 256, 00:19:32.844 "data_size": 7936 00:19:32.844 }, 00:19:32.844 { 00:19:32.844 "name": "BaseBdev2", 00:19:32.844 "uuid": "c0086b8e-8158-4843-9286-d4ff1dadcf62", 00:19:32.844 "is_configured": true, 00:19:32.844 "data_offset": 256, 00:19:32.844 "data_size": 7936 00:19:32.844 } 00:19:32.844 ] 00:19:32.844 } 00:19:32.844 } 00:19:32.844 }' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:32.844 BaseBdev2' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.844 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.101 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:33.101 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:19:33.102 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:33.102 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.102 14:36:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.102 [2024-11-20 14:36:33.923349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.102 "name": "Existed_Raid", 00:19:33.102 "uuid": "2bcd0a26-cdce-4e2c-946b-fcaa8fb4a82f", 00:19:33.102 "strip_size_kb": 0, 00:19:33.102 "state": "online", 00:19:33.102 "raid_level": "raid1", 00:19:33.102 "superblock": true, 00:19:33.102 "num_base_bdevs": 2, 00:19:33.102 "num_base_bdevs_discovered": 1, 00:19:33.102 "num_base_bdevs_operational": 1, 00:19:33.102 "base_bdevs_list": [ 00:19:33.102 { 00:19:33.102 "name": null, 00:19:33.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.102 "is_configured": false, 00:19:33.102 "data_offset": 0, 00:19:33.102 "data_size": 7936 00:19:33.102 }, 00:19:33.102 { 00:19:33.102 "name": "BaseBdev2", 00:19:33.102 "uuid": 
"c0086b8e-8158-4843-9286-d4ff1dadcf62", 00:19:33.102 "is_configured": true, 00:19:33.102 "data_offset": 256, 00:19:33.102 "data_size": 7936 00:19:33.102 } 00:19:33.102 ] 00:19:33.102 }' 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.102 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.668 [2024-11-20 14:36:34.580055] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:33.668 [2024-11-20 14:36:34.580361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.668 [2024-11-20 14:36:34.671401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.668 [2024-11-20 14:36:34.671755] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.668 [2024-11-20 14:36:34.671792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:33.668 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:33.669 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.669 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.669 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.669 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:33.669 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:33.927 14:36:34 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87717 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87717 ']' 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87717 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87717 00:19:33.927 killing process with pid 87717 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87717' 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87717 00:19:33.927 [2024-11-20 14:36:34.765329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.927 14:36:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87717 00:19:33.927 [2024-11-20 14:36:34.779745] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.862 14:36:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:34.862 00:19:34.862 real 0m5.533s 00:19:34.862 user 0m8.310s 00:19:34.862 sys 0m0.813s 00:19:34.862 14:36:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.862 
14:36:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.862 ************************************ 00:19:34.862 END TEST raid_state_function_test_sb_md_separate 00:19:34.862 ************************************ 00:19:34.862 14:36:35 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:34.862 14:36:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:34.862 14:36:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.862 14:36:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.862 ************************************ 00:19:34.862 START TEST raid_superblock_test_md_separate 00:19:34.862 ************************************ 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87974 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:34.862 14:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87974 00:19:34.863 14:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87974 ']' 00:19:34.863 14:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.863 14:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.863 14:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:34.863 14:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.863 14:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.121 [2024-11-20 14:36:36.023111] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:19:35.121 [2024-11-20 14:36:36.023306] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87974 ] 00:19:35.379 [2024-11-20 14:36:36.208916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.379 [2024-11-20 14:36:36.340943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.637 [2024-11-20 14:36:36.541568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.637 [2024-11-20 14:36:36.541664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.205 14:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.205 14:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:36.206 14:36:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.206 14:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.206 malloc1 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.206 [2024-11-20 14:36:37.025957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:36.206 [2024-11-20 14:36:37.026270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.206 [2024-11-20 14:36:37.026353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:36.206 [2024-11-20 14:36:37.026597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.206 [2024-11-20 14:36:37.029262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.206 pt1 00:19:36.206 [2024-11-20 14:36:37.029460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt1 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.206 malloc2 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.206 14:36:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.206 [2024-11-20 14:36:37.076593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.206 [2024-11-20 14:36:37.076735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.206 [2024-11-20 14:36:37.076770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:36.206 [2024-11-20 14:36:37.076800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.206 [2024-11-20 14:36:37.079537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.206 pt2 00:19:36.206 [2024-11-20 14:36:37.079789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.206 [2024-11-20 14:36:37.084723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.206 [2024-11-20 14:36:37.087396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.206 [2024-11-20 14:36:37.087616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:36.206 [2024-11-20 14:36:37.087683] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.206 [2024-11-20 14:36:37.087771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:36.206 [2024-11-20 14:36:37.087922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:36.206 [2024-11-20 14:36:37.087942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:36.206 [2024-11-20 14:36:37.088086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.206 14:36:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.206 "name": "raid_bdev1", 00:19:36.206 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:36.206 "strip_size_kb": 0, 00:19:36.206 "state": "online", 00:19:36.206 "raid_level": "raid1", 00:19:36.206 "superblock": true, 00:19:36.206 "num_base_bdevs": 2, 00:19:36.206 "num_base_bdevs_discovered": 2, 00:19:36.206 "num_base_bdevs_operational": 2, 00:19:36.206 "base_bdevs_list": [ 00:19:36.206 { 00:19:36.206 "name": "pt1", 00:19:36.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.206 "is_configured": true, 00:19:36.206 "data_offset": 256, 00:19:36.206 "data_size": 7936 00:19:36.206 }, 00:19:36.206 { 00:19:36.206 "name": "pt2", 00:19:36.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.206 "is_configured": true, 00:19:36.206 "data_offset": 256, 00:19:36.206 "data_size": 7936 00:19:36.206 } 00:19:36.206 ] 00:19:36.206 }' 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.206 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.773 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:36.773 14:36:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:36.773 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:36.773 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:36.773 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:36.773 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.774 [2024-11-20 14:36:37.625231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:36.774 "name": "raid_bdev1", 00:19:36.774 "aliases": [ 00:19:36.774 "3f9a6325-fef7-4035-b650-ce29594ee5a9" 00:19:36.774 ], 00:19:36.774 "product_name": "Raid Volume", 00:19:36.774 "block_size": 4096, 00:19:36.774 "num_blocks": 7936, 00:19:36.774 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:36.774 "md_size": 32, 00:19:36.774 "md_interleave": false, 00:19:36.774 "dif_type": 0, 00:19:36.774 "assigned_rate_limits": { 00:19:36.774 "rw_ios_per_sec": 0, 00:19:36.774 "rw_mbytes_per_sec": 0, 00:19:36.774 "r_mbytes_per_sec": 0, 00:19:36.774 "w_mbytes_per_sec": 0 00:19:36.774 }, 00:19:36.774 "claimed": false, 00:19:36.774 "zoned": false, 
00:19:36.774 "supported_io_types": { 00:19:36.774 "read": true, 00:19:36.774 "write": true, 00:19:36.774 "unmap": false, 00:19:36.774 "flush": false, 00:19:36.774 "reset": true, 00:19:36.774 "nvme_admin": false, 00:19:36.774 "nvme_io": false, 00:19:36.774 "nvme_io_md": false, 00:19:36.774 "write_zeroes": true, 00:19:36.774 "zcopy": false, 00:19:36.774 "get_zone_info": false, 00:19:36.774 "zone_management": false, 00:19:36.774 "zone_append": false, 00:19:36.774 "compare": false, 00:19:36.774 "compare_and_write": false, 00:19:36.774 "abort": false, 00:19:36.774 "seek_hole": false, 00:19:36.774 "seek_data": false, 00:19:36.774 "copy": false, 00:19:36.774 "nvme_iov_md": false 00:19:36.774 }, 00:19:36.774 "memory_domains": [ 00:19:36.774 { 00:19:36.774 "dma_device_id": "system", 00:19:36.774 "dma_device_type": 1 00:19:36.774 }, 00:19:36.774 { 00:19:36.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.774 "dma_device_type": 2 00:19:36.774 }, 00:19:36.774 { 00:19:36.774 "dma_device_id": "system", 00:19:36.774 "dma_device_type": 1 00:19:36.774 }, 00:19:36.774 { 00:19:36.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.774 "dma_device_type": 2 00:19:36.774 } 00:19:36.774 ], 00:19:36.774 "driver_specific": { 00:19:36.774 "raid": { 00:19:36.774 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:36.774 "strip_size_kb": 0, 00:19:36.774 "state": "online", 00:19:36.774 "raid_level": "raid1", 00:19:36.774 "superblock": true, 00:19:36.774 "num_base_bdevs": 2, 00:19:36.774 "num_base_bdevs_discovered": 2, 00:19:36.774 "num_base_bdevs_operational": 2, 00:19:36.774 "base_bdevs_list": [ 00:19:36.774 { 00:19:36.774 "name": "pt1", 00:19:36.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:36.774 "is_configured": true, 00:19:36.774 "data_offset": 256, 00:19:36.774 "data_size": 7936 00:19:36.774 }, 00:19:36.774 { 00:19:36.774 "name": "pt2", 00:19:36.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.774 "is_configured": true, 00:19:36.774 "data_offset": 256, 
00:19:36.774 "data_size": 7936 00:19:36.774 } 00:19:36.774 ] 00:19:36.774 } 00:19:36.774 } 00:19:36.774 }' 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:36.774 pt2' 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.774 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.033 [2024-11-20 14:36:37.897274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3f9a6325-fef7-4035-b650-ce29594ee5a9 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 3f9a6325-fef7-4035-b650-ce29594ee5a9 ']' 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:37.033 14:36:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.033 [2024-11-20 14:36:37.944877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.033 [2024-11-20 14:36:37.945075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.033 [2024-11-20 14:36:37.945277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.033 [2024-11-20 14:36:37.945452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.033 [2024-11-20 14:36:37.945594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.033 14:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.033 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.034 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.034 [2024-11-20 14:36:38.084933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:37.292 [2024-11-20 14:36:38.087982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:37.292 [2024-11-20 14:36:38.088115] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:37.292 [2024-11-20 14:36:38.088220] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:37.292 [2024-11-20 14:36:38.088245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.292 [2024-11-20 14:36:38.088260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:19:37.292 request: 00:19:37.292 { 00:19:37.292 "name": "raid_bdev1", 00:19:37.292 "raid_level": "raid1", 00:19:37.292 "base_bdevs": [ 00:19:37.292 "malloc1", 00:19:37.292 "malloc2" 00:19:37.292 ], 00:19:37.292 "superblock": false, 00:19:37.292 "method": "bdev_raid_create", 00:19:37.292 "req_id": 1 00:19:37.292 } 00:19:37.292 Got JSON-RPC error response 00:19:37.292 response: 00:19:37.292 { 00:19:37.292 "code": -17, 00:19:37.292 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:37.292 } 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.292 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.293 [2024-11-20 14:36:38.177196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:37.293 [2024-11-20 14:36:38.177276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.293 [2024-11-20 14:36:38.177302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:37.293 [2024-11-20 14:36:38.177318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.293 [2024-11-20 14:36:38.180076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.293 [2024-11-20 14:36:38.180137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:37.293 [2024-11-20 14:36:38.180196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:37.293 [2024-11-20 14:36:38.180281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:37.293 pt1 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.293 "name": "raid_bdev1", 00:19:37.293 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:37.293 "strip_size_kb": 0, 00:19:37.293 "state": "configuring", 00:19:37.293 "raid_level": "raid1", 00:19:37.293 "superblock": true, 00:19:37.293 "num_base_bdevs": 2, 00:19:37.293 "num_base_bdevs_discovered": 1, 00:19:37.293 "num_base_bdevs_operational": 2, 00:19:37.293 "base_bdevs_list": [ 00:19:37.293 { 00:19:37.293 "name": "pt1", 00:19:37.293 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.293 "is_configured": true, 00:19:37.293 "data_offset": 256, 00:19:37.293 "data_size": 7936 00:19:37.293 }, 00:19:37.293 { 
00:19:37.293 "name": null, 00:19:37.293 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.293 "is_configured": false, 00:19:37.293 "data_offset": 256, 00:19:37.293 "data_size": 7936 00:19:37.293 } 00:19:37.293 ] 00:19:37.293 }' 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.293 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.859 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:37.859 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:37.859 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:37.859 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.859 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.859 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.859 [2024-11-20 14:36:38.721383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.859 [2024-11-20 14:36:38.721703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.859 [2024-11-20 14:36:38.721861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:37.859 [2024-11-20 14:36:38.722010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.859 [2024-11-20 14:36:38.722392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.859 [2024-11-20 14:36:38.722443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.859 [2024-11-20 14:36:38.722539] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:37.859 [2024-11-20 14:36:38.722581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.859 [2024-11-20 14:36:38.722768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:37.859 [2024-11-20 14:36:38.722791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:37.860 [2024-11-20 14:36:38.722892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:37.860 [2024-11-20 14:36:38.723060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:37.860 [2024-11-20 14:36:38.723081] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:37.860 [2024-11-20 14:36:38.723207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.860 pt2 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.860 14:36:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.860 "name": "raid_bdev1", 00:19:37.860 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:37.860 "strip_size_kb": 0, 00:19:37.860 "state": "online", 00:19:37.860 "raid_level": "raid1", 00:19:37.860 "superblock": true, 00:19:37.860 "num_base_bdevs": 2, 00:19:37.860 "num_base_bdevs_discovered": 2, 00:19:37.860 "num_base_bdevs_operational": 2, 00:19:37.860 "base_bdevs_list": [ 00:19:37.860 { 00:19:37.860 "name": "pt1", 00:19:37.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:37.860 "is_configured": true, 00:19:37.860 "data_offset": 256, 00:19:37.860 "data_size": 7936 00:19:37.860 }, 00:19:37.860 { 00:19:37.860 "name": "pt2", 00:19:37.860 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:19:37.860 "is_configured": true, 00:19:37.860 "data_offset": 256, 00:19:37.860 "data_size": 7936 00:19:37.860 } 00:19:37.860 ] 00:19:37.860 }' 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.860 14:36:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.427 [2024-11-20 14:36:39.257922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:38.427 "name": "raid_bdev1", 00:19:38.427 
"aliases": [ 00:19:38.427 "3f9a6325-fef7-4035-b650-ce29594ee5a9" 00:19:38.427 ], 00:19:38.427 "product_name": "Raid Volume", 00:19:38.427 "block_size": 4096, 00:19:38.427 "num_blocks": 7936, 00:19:38.427 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:38.427 "md_size": 32, 00:19:38.427 "md_interleave": false, 00:19:38.427 "dif_type": 0, 00:19:38.427 "assigned_rate_limits": { 00:19:38.427 "rw_ios_per_sec": 0, 00:19:38.427 "rw_mbytes_per_sec": 0, 00:19:38.427 "r_mbytes_per_sec": 0, 00:19:38.427 "w_mbytes_per_sec": 0 00:19:38.427 }, 00:19:38.427 "claimed": false, 00:19:38.427 "zoned": false, 00:19:38.427 "supported_io_types": { 00:19:38.427 "read": true, 00:19:38.427 "write": true, 00:19:38.427 "unmap": false, 00:19:38.427 "flush": false, 00:19:38.427 "reset": true, 00:19:38.427 "nvme_admin": false, 00:19:38.427 "nvme_io": false, 00:19:38.427 "nvme_io_md": false, 00:19:38.427 "write_zeroes": true, 00:19:38.427 "zcopy": false, 00:19:38.427 "get_zone_info": false, 00:19:38.427 "zone_management": false, 00:19:38.427 "zone_append": false, 00:19:38.427 "compare": false, 00:19:38.427 "compare_and_write": false, 00:19:38.427 "abort": false, 00:19:38.427 "seek_hole": false, 00:19:38.427 "seek_data": false, 00:19:38.427 "copy": false, 00:19:38.427 "nvme_iov_md": false 00:19:38.427 }, 00:19:38.427 "memory_domains": [ 00:19:38.427 { 00:19:38.427 "dma_device_id": "system", 00:19:38.427 "dma_device_type": 1 00:19:38.427 }, 00:19:38.427 { 00:19:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.427 "dma_device_type": 2 00:19:38.427 }, 00:19:38.427 { 00:19:38.427 "dma_device_id": "system", 00:19:38.427 "dma_device_type": 1 00:19:38.427 }, 00:19:38.427 { 00:19:38.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.427 "dma_device_type": 2 00:19:38.427 } 00:19:38.427 ], 00:19:38.427 "driver_specific": { 00:19:38.427 "raid": { 00:19:38.427 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:38.427 "strip_size_kb": 0, 00:19:38.427 "state": "online", 00:19:38.427 
"raid_level": "raid1", 00:19:38.427 "superblock": true, 00:19:38.427 "num_base_bdevs": 2, 00:19:38.427 "num_base_bdevs_discovered": 2, 00:19:38.427 "num_base_bdevs_operational": 2, 00:19:38.427 "base_bdevs_list": [ 00:19:38.427 { 00:19:38.427 "name": "pt1", 00:19:38.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.427 "is_configured": true, 00:19:38.427 "data_offset": 256, 00:19:38.427 "data_size": 7936 00:19:38.427 }, 00:19:38.427 { 00:19:38.427 "name": "pt2", 00:19:38.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.427 "is_configured": true, 00:19:38.427 "data_offset": 256, 00:19:38.427 "data_size": 7936 00:19:38.427 } 00:19:38.427 ] 00:19:38.427 } 00:19:38.427 } 00:19:38.427 }' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:38.427 pt2' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.427 14:36:39 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:38.427 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.428 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:38.428 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.428 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.686 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:38.687 [2024-11-20 14:36:39.517895] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 3f9a6325-fef7-4035-b650-ce29594ee5a9 '!=' 3f9a6325-fef7-4035-b650-ce29594ee5a9 ']' 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.687 [2024-11-20 14:36:39.577584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.687 
14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.687 "name": "raid_bdev1", 00:19:38.687 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:38.687 "strip_size_kb": 0, 00:19:38.687 "state": "online", 00:19:38.687 "raid_level": "raid1", 00:19:38.687 "superblock": true, 00:19:38.687 "num_base_bdevs": 2, 00:19:38.687 "num_base_bdevs_discovered": 1, 00:19:38.687 "num_base_bdevs_operational": 1, 00:19:38.687 "base_bdevs_list": [ 00:19:38.687 { 00:19:38.687 "name": null, 00:19:38.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.687 "is_configured": false, 00:19:38.687 "data_offset": 0, 00:19:38.687 "data_size": 7936 00:19:38.687 }, 00:19:38.687 { 00:19:38.687 "name": "pt2", 00:19:38.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.687 "is_configured": true, 00:19:38.687 "data_offset": 256, 00:19:38.687 "data_size": 7936 00:19:38.687 } 
00:19:38.687 ] 00:19:38.687 }' 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.687 14:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.253 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.253 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.253 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.253 [2024-11-20 14:36:40.145785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.253 [2024-11-20 14:36:40.145962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.253 [2024-11-20 14:36:40.146110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.254 [2024-11-20 14:36:40.146180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.254 [2024-11-20 14:36:40.146232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.254 14:36:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.254 [2024-11-20 14:36:40.229748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:39.254 [2024-11-20 
14:36:40.229964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.254 [2024-11-20 14:36:40.230114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:39.254 [2024-11-20 14:36:40.230265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.254 [2024-11-20 14:36:40.233235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.254 [2024-11-20 14:36:40.233435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:39.254 [2024-11-20 14:36:40.233621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:39.254 [2024-11-20 14:36:40.233812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:39.254 [2024-11-20 14:36:40.234051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:39.254 [2024-11-20 14:36:40.234221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:39.254 [2024-11-20 14:36:40.234432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:39.254 pt2 00:19:39.254 [2024-11-20 14:36:40.234760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:39.254 [2024-11-20 14:36:40.234784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:39.254 [2024-11-20 14:36:40.234969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.254 "name": "raid_bdev1", 00:19:39.254 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:39.254 "strip_size_kb": 0, 00:19:39.254 "state": "online", 00:19:39.254 "raid_level": "raid1", 00:19:39.254 "superblock": true, 00:19:39.254 "num_base_bdevs": 2, 00:19:39.254 
"num_base_bdevs_discovered": 1, 00:19:39.254 "num_base_bdevs_operational": 1, 00:19:39.254 "base_bdevs_list": [ 00:19:39.254 { 00:19:39.254 "name": null, 00:19:39.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.254 "is_configured": false, 00:19:39.254 "data_offset": 256, 00:19:39.254 "data_size": 7936 00:19:39.254 }, 00:19:39.254 { 00:19:39.254 "name": "pt2", 00:19:39.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.254 "is_configured": true, 00:19:39.254 "data_offset": 256, 00:19:39.254 "data_size": 7936 00:19:39.254 } 00:19:39.254 ] 00:19:39.254 }' 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.254 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.821 [2024-11-20 14:36:40.758337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.821 [2024-11-20 14:36:40.758379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.821 [2024-11-20 14:36:40.758480] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.821 [2024-11-20 14:36:40.758581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.821 [2024-11-20 14:36:40.758597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.821 14:36:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.821 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.822 [2024-11-20 14:36:40.822366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.822 [2024-11-20 14:36:40.822578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.822 [2024-11-20 14:36:40.822676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:39.822 [2024-11-20 14:36:40.822937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.822 [2024-11-20 14:36:40.825734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.822 [2024-11-20 14:36:40.825912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:19:39.822 [2024-11-20 14:36:40.826006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:39.822 [2024-11-20 14:36:40.826070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:39.822 [2024-11-20 14:36:40.826265] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:39.822 [2024-11-20 14:36:40.826284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.822 [2024-11-20 14:36:40.826309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:39.822 [2024-11-20 14:36:40.826392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:39.822 [2024-11-20 14:36:40.826546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:39.822 [2024-11-20 14:36:40.826564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:39.822 [2024-11-20 14:36:40.826699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:39.822 pt1 00:19:39.822 [2024-11-20 14:36:40.826843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:39.822 [2024-11-20 14:36:40.826863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:39.822 [2024-11-20 14:36:40.827009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.822 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.079 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.079 "name": "raid_bdev1", 00:19:40.079 "uuid": "3f9a6325-fef7-4035-b650-ce29594ee5a9", 00:19:40.079 "strip_size_kb": 0, 00:19:40.079 "state": "online", 00:19:40.079 "raid_level": "raid1", 
00:19:40.079 "superblock": true, 00:19:40.079 "num_base_bdevs": 2, 00:19:40.079 "num_base_bdevs_discovered": 1, 00:19:40.079 "num_base_bdevs_operational": 1, 00:19:40.079 "base_bdevs_list": [ 00:19:40.079 { 00:19:40.079 "name": null, 00:19:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.079 "is_configured": false, 00:19:40.079 "data_offset": 256, 00:19:40.079 "data_size": 7936 00:19:40.079 }, 00:19:40.079 { 00:19:40.079 "name": "pt2", 00:19:40.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.079 "is_configured": true, 00:19:40.079 "data_offset": 256, 00:19:40.079 "data_size": 7936 00:19:40.079 } 00:19:40.079 ] 00:19:40.079 }' 00:19:40.079 14:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.079 14:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.337 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.596 14:36:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:40.596 [2024-11-20 14:36:41.394892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 3f9a6325-fef7-4035-b650-ce29594ee5a9 '!=' 3f9a6325-fef7-4035-b650-ce29594ee5a9 ']' 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87974 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87974 ']' 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87974 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87974 00:19:40.596 killing process with pid 87974 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87974' 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87974 00:19:40.596 [2024-11-20 14:36:41.477375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.596 [2024-11-20 14:36:41.477487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:19:40.596 14:36:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87974 00:19:40.596 [2024-11-20 14:36:41.477551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.596 [2024-11-20 14:36:41.477575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:40.855 [2024-11-20 14:36:41.652762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:41.790 14:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:41.790 00:19:41.790 real 0m6.744s 00:19:41.790 user 0m10.757s 00:19:41.790 sys 0m0.978s 00:19:41.790 14:36:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.790 14:36:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.790 ************************************ 00:19:41.790 END TEST raid_superblock_test_md_separate 00:19:41.790 ************************************ 00:19:41.790 14:36:42 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:41.790 14:36:42 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:41.790 14:36:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:41.790 14:36:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.790 14:36:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:41.790 ************************************ 00:19:41.790 START TEST raid_rebuild_test_sb_md_separate 00:19:41.790 ************************************ 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88302 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88302 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:41.790 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88302 ']' 00:19:41.791 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.791 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.791 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:41.791 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.791 14:36:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.791 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:41.791 Zero copy mechanism will not be used. 00:19:41.791 [2024-11-20 14:36:42.825916] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:19:41.791 [2024-11-20 14:36:42.826097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88302 ] 00:19:42.048 [2024-11-20 14:36:43.011916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.306 [2024-11-20 14:36:43.143517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.306 [2024-11-20 14:36:43.344917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.306 [2024-11-20 14:36:43.344969] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.873 BaseBdev1_malloc 
00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.873 [2024-11-20 14:36:43.846555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:42.873 [2024-11-20 14:36:43.847519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.873 [2024-11-20 14:36:43.847566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:42.873 [2024-11-20 14:36:43.847589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.873 [2024-11-20 14:36:43.850203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.873 [2024-11-20 14:36:43.850254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:42.873 BaseBdev1 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.873 BaseBdev2_malloc 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.873 [2024-11-20 14:36:43.903805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:42.873 [2024-11-20 14:36:43.904016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.873 [2024-11-20 14:36:43.904057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:42.873 [2024-11-20 14:36:43.904078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.873 [2024-11-20 14:36:43.906548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.873 [2024-11-20 14:36:43.906600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:42.873 BaseBdev2 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.873 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.132 spare_malloc 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.132 spare_delay 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.132 [2024-11-20 14:36:43.973185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:43.132 [2024-11-20 14:36:43.973393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.132 [2024-11-20 14:36:43.973470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:43.132 [2024-11-20 14:36:43.973600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.132 [2024-11-20 14:36:43.976250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.132 spare 00:19:43.132 [2024-11-20 14:36:43.976415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.132 [2024-11-20 14:36:43.981334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.132 [2024-11-20 14:36:43.983910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.132 [2024-11-20 14:36:43.984291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:43.132 [2024-11-20 14:36:43.984436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:43.132 [2024-11-20 14:36:43.984590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:43.132 [2024-11-20 14:36:43.984915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:43.132 [2024-11-20 14:36:43.984942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:43.132 [2024-11-20 14:36:43.985100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:43.132 14:36:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.132 14:36:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.132 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.132 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.132 "name": "raid_bdev1", 00:19:43.132 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:43.132 "strip_size_kb": 0, 00:19:43.132 "state": "online", 00:19:43.132 "raid_level": "raid1", 00:19:43.133 "superblock": true, 00:19:43.133 "num_base_bdevs": 2, 00:19:43.133 "num_base_bdevs_discovered": 2, 00:19:43.133 "num_base_bdevs_operational": 2, 00:19:43.133 "base_bdevs_list": [ 00:19:43.133 { 00:19:43.133 "name": "BaseBdev1", 00:19:43.133 "uuid": "2545245a-e6f6-508c-8a8b-6d8e5895c218", 00:19:43.133 "is_configured": true, 00:19:43.133 "data_offset": 256, 00:19:43.133 "data_size": 7936 00:19:43.133 }, 00:19:43.133 { 00:19:43.133 "name": "BaseBdev2", 00:19:43.133 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:43.133 "is_configured": true, 00:19:43.133 "data_offset": 256, 00:19:43.133 "data_size": 7936 
00:19:43.133 } 00:19:43.133 ] 00:19:43.133 }' 00:19:43.133 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.133 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:43.700 [2024-11-20 14:36:44.501879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:43.700 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:43.701 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:43.701 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:43.701 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:43.701 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:43.701 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:43.959 [2024-11-20 14:36:44.893733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:43.959 /dev/nbd0 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:43.959 1+0 records in 00:19:43.959 1+0 records out 00:19:43.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552323 s, 7.4 MB/s 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:43.959 14:36:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:43.959 14:36:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:44.892 7936+0 records in 00:19:44.892 7936+0 records out 00:19:44.892 32505856 bytes (33 MB, 31 MiB) copied, 0.895457 s, 36.3 MB/s 00:19:44.892 14:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:44.892 14:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.892 14:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:44.892 14:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:44.892 14:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:44.892 14:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:44.892 14:36:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:45.150 [2024-11-20 14:36:46.146355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.150 14:36:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.150 [2024-11-20 14:36:46.158879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.150 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.151 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.151 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.151 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.151 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.151 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.409 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.409 "name": "raid_bdev1", 00:19:45.409 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:45.409 "strip_size_kb": 0, 00:19:45.409 "state": "online", 00:19:45.409 "raid_level": "raid1", 00:19:45.409 "superblock": true, 00:19:45.409 "num_base_bdevs": 2, 00:19:45.409 "num_base_bdevs_discovered": 1, 00:19:45.409 "num_base_bdevs_operational": 1, 00:19:45.409 "base_bdevs_list": [ 00:19:45.409 { 00:19:45.409 "name": null, 00:19:45.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.409 "is_configured": false, 00:19:45.409 "data_offset": 0, 00:19:45.409 "data_size": 7936 00:19:45.409 }, 00:19:45.409 { 00:19:45.409 "name": "BaseBdev2", 00:19:45.410 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:45.410 "is_configured": true, 00:19:45.410 "data_offset": 256, 00:19:45.410 "data_size": 7936 00:19:45.410 } 00:19:45.410 ] 00:19:45.410 }' 00:19:45.410 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.410 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:45.689 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:45.689 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.689 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.689 [2024-11-20 14:36:46.675070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:45.689 [2024-11-20 14:36:46.689199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:45.689 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.689 14:36:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:45.689 [2024-11-20 14:36:46.691760] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.063 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.063 "name": "raid_bdev1", 00:19:47.063 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:47.063 "strip_size_kb": 0, 00:19:47.063 "state": "online", 00:19:47.063 "raid_level": "raid1", 00:19:47.063 "superblock": true, 00:19:47.063 "num_base_bdevs": 2, 00:19:47.063 "num_base_bdevs_discovered": 2, 00:19:47.063 "num_base_bdevs_operational": 2, 00:19:47.063 "process": { 00:19:47.063 "type": "rebuild", 00:19:47.063 "target": "spare", 00:19:47.063 "progress": { 00:19:47.063 "blocks": 2560, 00:19:47.063 "percent": 32 00:19:47.063 } 00:19:47.063 }, 00:19:47.063 "base_bdevs_list": [ 00:19:47.063 { 00:19:47.063 "name": "spare", 00:19:47.063 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:47.063 "is_configured": true, 00:19:47.063 "data_offset": 256, 00:19:47.063 "data_size": 7936 00:19:47.063 }, 00:19:47.063 { 00:19:47.063 "name": "BaseBdev2", 00:19:47.063 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:47.064 "is_configured": true, 00:19:47.064 "data_offset": 256, 00:19:47.064 "data_size": 7936 00:19:47.064 } 00:19:47.064 ] 00:19:47.064 }' 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.064 14:36:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.064 [2024-11-20 14:36:47.864980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.064 [2024-11-20 14:36:47.900976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:47.064 [2024-11-20 14:36:47.901253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.064 [2024-11-20 14:36:47.901399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.064 [2024-11-20 14:36:47.901436] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.064 14:36:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.064 "name": "raid_bdev1", 00:19:47.064 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:47.064 "strip_size_kb": 0, 00:19:47.064 "state": "online", 00:19:47.064 "raid_level": "raid1", 00:19:47.064 "superblock": true, 00:19:47.064 "num_base_bdevs": 2, 00:19:47.064 "num_base_bdevs_discovered": 1, 00:19:47.064 "num_base_bdevs_operational": 1, 00:19:47.064 "base_bdevs_list": [ 00:19:47.064 { 00:19:47.064 "name": null, 00:19:47.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.064 "is_configured": false, 00:19:47.064 "data_offset": 0, 00:19:47.064 "data_size": 7936 00:19:47.064 }, 00:19:47.064 { 00:19:47.064 "name": "BaseBdev2", 00:19:47.064 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:47.064 "is_configured": true, 00:19:47.064 "data_offset": 256, 00:19:47.064 "data_size": 7936 00:19:47.064 } 00:19:47.064 ] 00:19:47.064 }' 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.064 14:36:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.631 "name": "raid_bdev1", 00:19:47.631 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:47.631 "strip_size_kb": 0, 00:19:47.631 "state": "online", 00:19:47.631 "raid_level": "raid1", 00:19:47.631 "superblock": true, 00:19:47.631 "num_base_bdevs": 2, 00:19:47.631 "num_base_bdevs_discovered": 1, 00:19:47.631 "num_base_bdevs_operational": 1, 00:19:47.631 "base_bdevs_list": [ 00:19:47.631 { 00:19:47.631 "name": null, 00:19:47.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.631 
"is_configured": false, 00:19:47.631 "data_offset": 0, 00:19:47.631 "data_size": 7936 00:19:47.631 }, 00:19:47.631 { 00:19:47.631 "name": "BaseBdev2", 00:19:47.631 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:47.631 "is_configured": true, 00:19:47.631 "data_offset": 256, 00:19:47.631 "data_size": 7936 00:19:47.631 } 00:19:47.631 ] 00:19:47.631 }' 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.631 [2024-11-20 14:36:48.588204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:47.631 [2024-11-20 14:36:48.601897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.631 14:36:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:47.631 [2024-11-20 14:36:48.604521] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.566 14:36:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.566 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.824 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.824 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.824 "name": "raid_bdev1", 00:19:48.824 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:48.824 "strip_size_kb": 0, 00:19:48.824 "state": "online", 00:19:48.824 "raid_level": "raid1", 00:19:48.825 "superblock": true, 00:19:48.825 "num_base_bdevs": 2, 00:19:48.825 "num_base_bdevs_discovered": 2, 00:19:48.825 "num_base_bdevs_operational": 2, 00:19:48.825 "process": { 00:19:48.825 "type": "rebuild", 00:19:48.825 "target": "spare", 00:19:48.825 "progress": { 00:19:48.825 "blocks": 2560, 00:19:48.825 "percent": 32 00:19:48.825 } 00:19:48.825 }, 00:19:48.825 "base_bdevs_list": [ 00:19:48.825 { 00:19:48.825 "name": "spare", 00:19:48.825 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:48.825 "is_configured": true, 00:19:48.825 "data_offset": 256, 00:19:48.825 "data_size": 7936 00:19:48.825 }, 
00:19:48.825 { 00:19:48.825 "name": "BaseBdev2", 00:19:48.825 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:48.825 "is_configured": true, 00:19:48.825 "data_offset": 256, 00:19:48.825 "data_size": 7936 00:19:48.825 } 00:19:48.825 ] 00:19:48.825 }' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:48.825 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=771 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.825 14:36:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.825 "name": "raid_bdev1", 00:19:48.825 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:48.825 "strip_size_kb": 0, 00:19:48.825 "state": "online", 00:19:48.825 "raid_level": "raid1", 00:19:48.825 "superblock": true, 00:19:48.825 "num_base_bdevs": 2, 00:19:48.825 "num_base_bdevs_discovered": 2, 00:19:48.825 "num_base_bdevs_operational": 2, 00:19:48.825 "process": { 00:19:48.825 "type": "rebuild", 00:19:48.825 "target": "spare", 00:19:48.825 "progress": { 00:19:48.825 "blocks": 2816, 00:19:48.825 "percent": 35 00:19:48.825 } 00:19:48.825 }, 00:19:48.825 "base_bdevs_list": [ 00:19:48.825 { 00:19:48.825 "name": "spare", 00:19:48.825 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:48.825 "is_configured": true, 00:19:48.825 "data_offset": 256, 00:19:48.825 "data_size": 7936 00:19:48.825 }, 00:19:48.825 { 00:19:48.825 "name": "BaseBdev2", 00:19:48.825 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:48.825 
"is_configured": true, 00:19:48.825 "data_offset": 256, 00:19:48.825 "data_size": 7936 00:19:48.825 } 00:19:48.825 ] 00:19:48.825 }' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:48.825 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.083 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.083 14:36:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.016 14:36:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.016 "name": "raid_bdev1", 00:19:50.016 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:50.016 "strip_size_kb": 0, 00:19:50.016 "state": "online", 00:19:50.016 "raid_level": "raid1", 00:19:50.016 "superblock": true, 00:19:50.016 "num_base_bdevs": 2, 00:19:50.016 "num_base_bdevs_discovered": 2, 00:19:50.016 "num_base_bdevs_operational": 2, 00:19:50.016 "process": { 00:19:50.016 "type": "rebuild", 00:19:50.016 "target": "spare", 00:19:50.016 "progress": { 00:19:50.016 "blocks": 5888, 00:19:50.016 "percent": 74 00:19:50.016 } 00:19:50.016 }, 00:19:50.016 "base_bdevs_list": [ 00:19:50.016 { 00:19:50.016 "name": "spare", 00:19:50.016 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:50.016 "is_configured": true, 00:19:50.016 "data_offset": 256, 00:19:50.016 "data_size": 7936 00:19:50.016 }, 00:19:50.016 { 00:19:50.016 "name": "BaseBdev2", 00:19:50.016 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:50.016 "is_configured": true, 00:19:50.016 "data_offset": 256, 00:19:50.016 "data_size": 7936 00:19:50.016 } 00:19:50.016 ] 00:19:50.016 }' 00:19:50.016 14:36:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.016 14:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.016 14:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.274 14:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.274 14:36:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:50.840 [2024-11-20 14:36:51.725664] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:19:50.840 [2024-11-20 14:36:51.725779] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:50.840 [2024-11-20 14:36:51.725923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.099 "name": "raid_bdev1", 00:19:51.099 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:51.099 "strip_size_kb": 0, 00:19:51.099 "state": "online", 00:19:51.099 "raid_level": "raid1", 00:19:51.099 "superblock": true, 00:19:51.099 
"num_base_bdevs": 2, 00:19:51.099 "num_base_bdevs_discovered": 2, 00:19:51.099 "num_base_bdevs_operational": 2, 00:19:51.099 "base_bdevs_list": [ 00:19:51.099 { 00:19:51.099 "name": "spare", 00:19:51.099 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:51.099 "is_configured": true, 00:19:51.099 "data_offset": 256, 00:19:51.099 "data_size": 7936 00:19:51.099 }, 00:19:51.099 { 00:19:51.099 "name": "BaseBdev2", 00:19:51.099 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:51.099 "is_configured": true, 00:19:51.099 "data_offset": 256, 00:19:51.099 "data_size": 7936 00:19:51.099 } 00:19:51.099 ] 00:19:51.099 }' 00:19:51.099 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.358 
14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.358 "name": "raid_bdev1", 00:19:51.358 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:51.358 "strip_size_kb": 0, 00:19:51.358 "state": "online", 00:19:51.358 "raid_level": "raid1", 00:19:51.358 "superblock": true, 00:19:51.358 "num_base_bdevs": 2, 00:19:51.358 "num_base_bdevs_discovered": 2, 00:19:51.358 "num_base_bdevs_operational": 2, 00:19:51.358 "base_bdevs_list": [ 00:19:51.358 { 00:19:51.358 "name": "spare", 00:19:51.358 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:51.358 "is_configured": true, 00:19:51.358 "data_offset": 256, 00:19:51.358 "data_size": 7936 00:19:51.358 }, 00:19:51.358 { 00:19:51.358 "name": "BaseBdev2", 00:19:51.358 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:51.358 "is_configured": true, 00:19:51.358 "data_offset": 256, 00:19:51.358 "data_size": 7936 00:19:51.358 } 00:19:51.358 ] 00:19:51.358 }' 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:51.358 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.617 "name": "raid_bdev1", 00:19:51.617 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:51.617 
"strip_size_kb": 0, 00:19:51.617 "state": "online", 00:19:51.617 "raid_level": "raid1", 00:19:51.617 "superblock": true, 00:19:51.617 "num_base_bdevs": 2, 00:19:51.617 "num_base_bdevs_discovered": 2, 00:19:51.617 "num_base_bdevs_operational": 2, 00:19:51.617 "base_bdevs_list": [ 00:19:51.617 { 00:19:51.617 "name": "spare", 00:19:51.617 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:51.617 "is_configured": true, 00:19:51.617 "data_offset": 256, 00:19:51.617 "data_size": 7936 00:19:51.617 }, 00:19:51.617 { 00:19:51.617 "name": "BaseBdev2", 00:19:51.617 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:51.617 "is_configured": true, 00:19:51.617 "data_offset": 256, 00:19:51.617 "data_size": 7936 00:19:51.617 } 00:19:51.617 ] 00:19:51.617 }' 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.617 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.184 [2024-11-20 14:36:52.952316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:52.184 [2024-11-20 14:36:52.952537] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.184 [2024-11-20 14:36:52.952815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.184 [2024-11-20 14:36:52.953033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.184 [2024-11-20 14:36:52.953188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.184 14:36:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:52.184 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:52.443 /dev/nbd0 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:52.443 1+0 records in 00:19:52.443 1+0 records out 00:19:52.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564524 s, 7.3 MB/s 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:52.443 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:52.702 /dev/nbd1 00:19:52.702 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:52.702 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:52.702 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:52.702 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:52.702 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:52.702 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:52.702 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:52.703 1+0 records in 00:19:52.703 1+0 records out 00:19:52.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361232 s, 11.3 MB/s 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:52.703 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:52.961 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:52.961 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:52.961 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:52.961 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:19:52.961 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:52.961 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.961 14:36:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:53.220 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.501 [2024-11-20 14:36:54.419111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:53.501 [2024-11-20 14:36:54.419359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.501 [2024-11-20 14:36:54.419527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:53.501 [2024-11-20 14:36:54.419668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:53.501 [2024-11-20 14:36:54.422421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.501 [2024-11-20 14:36:54.422468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:53.501 [2024-11-20 14:36:54.422557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:53.501 [2024-11-20 14:36:54.422654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.501 spare 00:19:53.501 [2024-11-20 14:36:54.422843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.501 [2024-11-20 14:36:54.522960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:53.501 [2024-11-20 14:36:54.523028] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:53.501 [2024-11-20 14:36:54.523164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:53.501 [2024-11-20 14:36:54.523331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:53.501 [2024-11-20 14:36:54.523347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:53.501 [2024-11-20 14:36:54.523503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.501 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.770 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.770 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.770 "name": "raid_bdev1", 00:19:53.770 "uuid": 
"6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:53.770 "strip_size_kb": 0, 00:19:53.770 "state": "online", 00:19:53.770 "raid_level": "raid1", 00:19:53.770 "superblock": true, 00:19:53.770 "num_base_bdevs": 2, 00:19:53.770 "num_base_bdevs_discovered": 2, 00:19:53.770 "num_base_bdevs_operational": 2, 00:19:53.770 "base_bdevs_list": [ 00:19:53.770 { 00:19:53.770 "name": "spare", 00:19:53.770 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:53.770 "is_configured": true, 00:19:53.770 "data_offset": 256, 00:19:53.770 "data_size": 7936 00:19:53.770 }, 00:19:53.770 { 00:19:53.770 "name": "BaseBdev2", 00:19:53.770 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:53.770 "is_configured": true, 00:19:53.770 "data_offset": 256, 00:19:53.770 "data_size": 7936 00:19:53.770 } 00:19:53.770 ] 00:19:53.770 }' 00:19:53.770 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.770 14:36:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
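The `verify_raid_bdev_state` helper seen above fetches every raid bdev with `rpc.py bdev_raid_get_bdevs all`, filters with `jq -r '.[] | select(.name == "raid_bdev1")'`, and then asserts on the state, raid level, strip size, and base-bdev counts. The same checks can be sketched in Python against the exact JSON payload captured in this log (field names come straight from the `bdev_raid_get_bdevs` output; the helper function itself is an illustration, not the SPDK shell source):

```python
import json

# Payload as captured from `rpc.py bdev_raid_get_bdevs all` in the log above
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "spare", "is_configured": true,
     "data_offset": 256, "data_size": 7936},
    {"name": "BaseBdev2", "is_configured": true,
     "data_offset": 256, "data_size": 7936}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Mirror of the shell helper's assertions (sketch only)."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational

# Matches the `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call above
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```

Note `strip_size_kb` is 0 because raid1 mirrors rather than stripes; the same helper is later called with `num_operational=1` after the spare base bdev is removed.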
00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.028 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.028 "name": "raid_bdev1", 00:19:54.028 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:54.028 "strip_size_kb": 0, 00:19:54.028 "state": "online", 00:19:54.028 "raid_level": "raid1", 00:19:54.028 "superblock": true, 00:19:54.028 "num_base_bdevs": 2, 00:19:54.028 "num_base_bdevs_discovered": 2, 00:19:54.028 "num_base_bdevs_operational": 2, 00:19:54.028 "base_bdevs_list": [ 00:19:54.028 { 00:19:54.028 "name": "spare", 00:19:54.028 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:54.029 "is_configured": true, 00:19:54.029 "data_offset": 256, 00:19:54.029 "data_size": 7936 00:19:54.029 }, 00:19:54.029 { 00:19:54.029 "name": "BaseBdev2", 00:19:54.029 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:54.029 "is_configured": true, 00:19:54.029 "data_offset": 256, 00:19:54.029 "data_size": 7936 00:19:54.029 } 00:19:54.029 ] 00:19:54.029 }' 00:19:54.029 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.287 
14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 [2024-11-20 14:36:55.215773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.287 "name": "raid_bdev1", 00:19:54.287 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:54.287 "strip_size_kb": 0, 00:19:54.287 "state": "online", 00:19:54.287 "raid_level": "raid1", 00:19:54.287 "superblock": true, 00:19:54.287 "num_base_bdevs": 2, 00:19:54.287 "num_base_bdevs_discovered": 1, 00:19:54.287 "num_base_bdevs_operational": 1, 00:19:54.287 "base_bdevs_list": [ 00:19:54.287 { 00:19:54.287 "name": null, 00:19:54.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.287 "is_configured": false, 00:19:54.287 "data_offset": 0, 00:19:54.287 "data_size": 7936 00:19:54.287 }, 00:19:54.287 { 00:19:54.287 "name": "BaseBdev2", 00:19:54.287 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:54.287 "is_configured": true, 00:19:54.287 "data_offset": 256, 00:19:54.287 "data_size": 7936 00:19:54.287 } 00:19:54.287 ] 00:19:54.287 }' 00:19:54.287 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.287 14:36:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.853 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:54.853 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.853 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.853 [2024-11-20 14:36:55.735983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.853 [2024-11-20 14:36:55.736290] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:54.853 [2024-11-20 14:36:55.736317] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:54.854 [2024-11-20 14:36:55.736381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:54.854 [2024-11-20 14:36:55.749431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:54.854 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.854 14:36:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:54.854 [2024-11-20 14:36:55.752392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.788 "name": "raid_bdev1", 00:19:55.788 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:55.788 "strip_size_kb": 0, 00:19:55.788 "state": "online", 00:19:55.788 "raid_level": "raid1", 00:19:55.788 "superblock": true, 00:19:55.788 "num_base_bdevs": 2, 00:19:55.788 "num_base_bdevs_discovered": 2, 00:19:55.788 "num_base_bdevs_operational": 2, 00:19:55.788 "process": { 00:19:55.788 "type": "rebuild", 00:19:55.788 "target": "spare", 00:19:55.788 "progress": { 00:19:55.788 "blocks": 2560, 00:19:55.788 "percent": 32 00:19:55.788 } 00:19:55.788 }, 00:19:55.788 "base_bdevs_list": [ 00:19:55.788 { 00:19:55.788 "name": "spare", 00:19:55.788 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:55.788 "is_configured": true, 00:19:55.788 "data_offset": 256, 00:19:55.788 "data_size": 7936 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "name": "BaseBdev2", 00:19:55.788 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:55.788 "is_configured": true, 00:19:55.788 "data_offset": 256, 00:19:55.788 "data_size": 7936 00:19:55.788 } 00:19:55.788 ] 00:19:55.788 }' 00:19:55.788 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
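While the rebuild runs, `verify_raid_bdev_process` extracts the background-process descriptor with `jq -r '.process.type // "none"'` and `'.process.target // "none"'`, where `//` supplies the fallback when no process object is attached. A small Python sketch of that fallback logic, fed with the in-progress `process` object shown in the log (the variable names are illustrative):

```python
import json

# Raid bdev as reported mid-rebuild by `bdev_raid_get_bdevs` in the log above
bdev = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": {"blocks": 2560, "percent": 32}
  }
}
""")

# Equivalent of jq's '.process.type // "none"' / '.process.target // "none"':
# fall back to "none" when no background process is attached to the bdev.
process = bdev.get("process") or {}
ptype = process.get("type", "none")
target = process.get("target", "none")

print(ptype, target, process["progress"]["percent"])  # → rebuild spare 32
```

Once the rebuild finishes (or the target is deleted mid-rebuild, as this test does with `bdev_passthru_delete spare`), the `process` key disappears and both expressions collapse to `"none"`, which is what the later `verify_raid_bdev_process raid_bdev1 none none` call checks.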
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.046 [2024-11-20 14:36:56.917789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.046 [2024-11-20 14:36:56.962739] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:56.046 [2024-11-20 14:36:56.962952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.046 [2024-11-20 14:36:56.962981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:56.046 [2024-11-20 14:36:56.963009] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.046 14:36:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.046 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.046 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.046 "name": "raid_bdev1", 00:19:56.046 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:56.046 "strip_size_kb": 0, 00:19:56.046 "state": "online", 00:19:56.046 "raid_level": "raid1", 00:19:56.046 "superblock": true, 00:19:56.046 "num_base_bdevs": 2, 00:19:56.046 "num_base_bdevs_discovered": 1, 00:19:56.046 "num_base_bdevs_operational": 1, 00:19:56.046 "base_bdevs_list": [ 00:19:56.046 { 00:19:56.046 "name": null, 00:19:56.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.046 
"is_configured": false, 00:19:56.046 "data_offset": 0, 00:19:56.046 "data_size": 7936 00:19:56.046 }, 00:19:56.046 { 00:19:56.046 "name": "BaseBdev2", 00:19:56.046 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:56.046 "is_configured": true, 00:19:56.046 "data_offset": 256, 00:19:56.046 "data_size": 7936 00:19:56.046 } 00:19:56.046 ] 00:19:56.046 }' 00:19:56.046 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.046 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.612 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:56.612 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.612 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.612 [2024-11-20 14:36:57.509486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:56.612 [2024-11-20 14:36:57.509781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.612 [2024-11-20 14:36:57.509843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:56.612 [2024-11-20 14:36:57.509865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.612 [2024-11-20 14:36:57.510329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.612 [2024-11-20 14:36:57.510361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:56.612 [2024-11-20 14:36:57.510451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:56.612 [2024-11-20 14:36:57.510487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:19:56.612 [2024-11-20 14:36:57.510502] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:56.612 [2024-11-20 14:36:57.510561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:56.612 [2024-11-20 14:36:57.523496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:56.612 spare 00:19:56.612 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.612 14:36:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:56.612 [2024-11-20 14:36:57.526343] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.547 "name": "raid_bdev1", 00:19:57.547 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:57.547 "strip_size_kb": 0, 00:19:57.547 "state": "online", 00:19:57.547 "raid_level": "raid1", 00:19:57.547 "superblock": true, 00:19:57.547 "num_base_bdevs": 2, 00:19:57.547 "num_base_bdevs_discovered": 2, 00:19:57.547 "num_base_bdevs_operational": 2, 00:19:57.547 "process": { 00:19:57.547 "type": "rebuild", 00:19:57.547 "target": "spare", 00:19:57.547 "progress": { 00:19:57.547 "blocks": 2560, 00:19:57.547 "percent": 32 00:19:57.547 } 00:19:57.547 }, 00:19:57.547 "base_bdevs_list": [ 00:19:57.547 { 00:19:57.547 "name": "spare", 00:19:57.547 "uuid": "05e93c02-a7ea-54e8-95c4-7cd4fd7c1399", 00:19:57.547 "is_configured": true, 00:19:57.547 "data_offset": 256, 00:19:57.547 "data_size": 7936 00:19:57.547 }, 00:19:57.547 { 00:19:57.547 "name": "BaseBdev2", 00:19:57.547 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:57.547 "is_configured": true, 00:19:57.547 "data_offset": 256, 00:19:57.547 "data_size": 7936 00:19:57.547 } 00:19:57.547 ] 00:19:57.547 }' 00:19:57.547 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.805 14:36:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.805 [2024-11-20 14:36:58.692179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:57.805 [2024-11-20 14:36:58.736082] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:57.805 [2024-11-20 14:36:58.736373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.805 [2024-11-20 14:36:58.736408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:57.805 [2024-11-20 14:36:58.736421] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.805 14:36:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.805 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.805 "name": "raid_bdev1", 00:19:57.805 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:57.805 "strip_size_kb": 0, 00:19:57.806 "state": "online", 00:19:57.806 "raid_level": "raid1", 00:19:57.806 "superblock": true, 00:19:57.806 "num_base_bdevs": 2, 00:19:57.806 "num_base_bdevs_discovered": 1, 00:19:57.806 "num_base_bdevs_operational": 1, 00:19:57.806 "base_bdevs_list": [ 00:19:57.806 { 00:19:57.806 "name": null, 00:19:57.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.806 "is_configured": false, 00:19:57.806 "data_offset": 0, 00:19:57.806 "data_size": 7936 00:19:57.806 }, 00:19:57.806 { 00:19:57.806 "name": "BaseBdev2", 00:19:57.806 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:57.806 "is_configured": true, 00:19:57.806 "data_offset": 256, 00:19:57.806 "data_size": 7936 00:19:57.806 } 00:19:57.806 ] 00:19:57.806 }' 00:19:57.806 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.806 14:36:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.371 "name": "raid_bdev1", 00:19:58.371 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:58.371 "strip_size_kb": 0, 00:19:58.371 "state": "online", 00:19:58.371 "raid_level": "raid1", 00:19:58.371 "superblock": true, 00:19:58.371 "num_base_bdevs": 2, 00:19:58.371 "num_base_bdevs_discovered": 1, 00:19:58.371 "num_base_bdevs_operational": 1, 00:19:58.371 "base_bdevs_list": [ 00:19:58.371 { 00:19:58.371 "name": null, 00:19:58.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.371 "is_configured": false, 00:19:58.371 "data_offset": 0, 00:19:58.371 "data_size": 7936 00:19:58.371 }, 00:19:58.371 { 00:19:58.371 "name": "BaseBdev2", 00:19:58.371 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:58.371 "is_configured": true, 
00:19:58.371 "data_offset": 256, 00:19:58.371 "data_size": 7936 00:19:58.371 } 00:19:58.371 ] 00:19:58.371 }' 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.371 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:58.629 [2024-11-20 14:36:59.470564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:58.629 [2024-11-20 14:36:59.470845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:58.629 [2024-11-20 14:36:59.470895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:58.629 [2024-11-20 14:36:59.470913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:58.629 [2024-11-20 14:36:59.471263] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:58.629 [2024-11-20 14:36:59.471293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:58.629 [2024-11-20 14:36:59.471395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:58.629 [2024-11-20 14:36:59.471413] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:58.629 [2024-11-20 14:36:59.471429] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:58.629 [2024-11-20 14:36:59.471441] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:58.629 BaseBdev1 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.629 14:36:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.565 "name": "raid_bdev1", 00:19:59.565 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:19:59.565 "strip_size_kb": 0, 00:19:59.565 "state": "online", 00:19:59.565 "raid_level": "raid1", 00:19:59.565 "superblock": true, 00:19:59.565 "num_base_bdevs": 2, 00:19:59.565 "num_base_bdevs_discovered": 1, 00:19:59.565 "num_base_bdevs_operational": 1, 00:19:59.565 "base_bdevs_list": [ 00:19:59.565 { 00:19:59.565 "name": null, 00:19:59.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.565 "is_configured": false, 00:19:59.565 "data_offset": 0, 00:19:59.565 "data_size": 7936 00:19:59.565 }, 00:19:59.565 { 00:19:59.565 "name": "BaseBdev2", 00:19:59.565 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:19:59.565 "is_configured": true, 00:19:59.565 "data_offset": 256, 00:19:59.565 "data_size": 7936 00:19:59.565 } 00:19:59.565 ] 00:19:59.565 }' 00:19:59.565 14:37:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.565 14:37:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.132 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.132 "name": "raid_bdev1", 00:20:00.132 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:20:00.132 "strip_size_kb": 0, 00:20:00.132 "state": "online", 00:20:00.133 "raid_level": "raid1", 00:20:00.133 "superblock": true, 00:20:00.133 "num_base_bdevs": 2, 00:20:00.133 "num_base_bdevs_discovered": 1, 00:20:00.133 "num_base_bdevs_operational": 1, 00:20:00.133 "base_bdevs_list": [ 00:20:00.133 { 00:20:00.133 "name": null, 00:20:00.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.133 "is_configured": false, 00:20:00.133 "data_offset": 0, 00:20:00.133 
"data_size": 7936 00:20:00.133 }, 00:20:00.133 { 00:20:00.133 "name": "BaseBdev2", 00:20:00.133 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:20:00.133 "is_configured": true, 00:20:00.133 "data_offset": 256, 00:20:00.133 "data_size": 7936 00:20:00.133 } 00:20:00.133 ] 00:20:00.133 }' 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:00.133 [2024-11-20 14:37:01.175223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.133 [2024-11-20 14:37:01.175581] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:00.133 [2024-11-20 14:37:01.175617] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:00.133 request: 00:20:00.133 { 00:20:00.133 "base_bdev": "BaseBdev1", 00:20:00.133 "raid_bdev": "raid_bdev1", 00:20:00.133 "method": "bdev_raid_add_base_bdev", 00:20:00.133 "req_id": 1 00:20:00.133 } 00:20:00.133 Got JSON-RPC error response 00:20:00.133 response: 00:20:00.133 { 00:20:00.133 "code": -22, 00:20:00.133 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:00.133 } 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:00.133 14:37:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.509 "name": "raid_bdev1", 00:20:01.509 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:20:01.509 "strip_size_kb": 0, 00:20:01.509 "state": "online", 00:20:01.509 "raid_level": "raid1", 00:20:01.509 "superblock": true, 00:20:01.509 "num_base_bdevs": 2, 00:20:01.509 "num_base_bdevs_discovered": 1, 00:20:01.509 "num_base_bdevs_operational": 1, 00:20:01.509 "base_bdevs_list": [ 
00:20:01.509 { 00:20:01.509 "name": null, 00:20:01.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.509 "is_configured": false, 00:20:01.509 "data_offset": 0, 00:20:01.509 "data_size": 7936 00:20:01.509 }, 00:20:01.509 { 00:20:01.509 "name": "BaseBdev2", 00:20:01.509 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:20:01.509 "is_configured": true, 00:20:01.509 "data_offset": 256, 00:20:01.509 "data_size": 7936 00:20:01.509 } 00:20:01.509 ] 00:20:01.509 }' 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.509 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.768 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:01.768 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.768 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:01.768 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:01.768 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.769 "name": "raid_bdev1", 00:20:01.769 "uuid": "6b1164ea-15fc-4f9e-acaf-628daaece176", 00:20:01.769 "strip_size_kb": 0, 00:20:01.769 "state": "online", 00:20:01.769 "raid_level": "raid1", 00:20:01.769 "superblock": true, 00:20:01.769 "num_base_bdevs": 2, 00:20:01.769 "num_base_bdevs_discovered": 1, 00:20:01.769 "num_base_bdevs_operational": 1, 00:20:01.769 "base_bdevs_list": [ 00:20:01.769 { 00:20:01.769 "name": null, 00:20:01.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.769 "is_configured": false, 00:20:01.769 "data_offset": 0, 00:20:01.769 "data_size": 7936 00:20:01.769 }, 00:20:01.769 { 00:20:01.769 "name": "BaseBdev2", 00:20:01.769 "uuid": "145b7d23-b915-5dda-bf33-233102768acf", 00:20:01.769 "is_configured": true, 00:20:01.769 "data_offset": 256, 00:20:01.769 "data_size": 7936 00:20:01.769 } 00:20:01.769 ] 00:20:01.769 }' 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:01.769 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88302 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88302 ']' 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88302 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.029 
14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88302 00:20:02.029 killing process with pid 88302 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88302' 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88302 00:20:02.029 Received shutdown signal, test time was about 60.000000 seconds 00:20:02.029 00:20:02.029 Latency(us) 00:20:02.029 [2024-11-20T14:37:03.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.029 [2024-11-20T14:37:03.086Z] =================================================================================================================== 00:20:02.029 [2024-11-20T14:37:03.086Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.029 14:37:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88302 00:20:02.029 [2024-11-20 14:37:02.883559] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:02.029 [2024-11-20 14:37:02.883764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.029 [2024-11-20 14:37:02.883834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.029 [2024-11-20 14:37:02.883855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:02.288 [2024-11-20 14:37:03.171619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:03.225 14:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:20:03.225 00:20:03.225 real 0m21.517s 00:20:03.225 user 0m29.226s 00:20:03.225 sys 0m2.482s 00:20:03.225 14:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.225 14:37:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:03.225 ************************************ 00:20:03.225 END TEST raid_rebuild_test_sb_md_separate 00:20:03.225 ************************************ 00:20:03.225 14:37:04 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:03.225 14:37:04 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:03.225 14:37:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:03.225 14:37:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.225 14:37:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:03.484 ************************************ 00:20:03.484 START TEST raid_state_function_test_sb_md_interleaved 00:20:03.484 ************************************ 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:03.484 14:37:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89004 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89004' 00:20:03.484 Process raid pid: 89004 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89004 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89004 ']' 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.484 14:37:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.484 [2024-11-20 14:37:04.389134] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:20:03.484 [2024-11-20 14:37:04.389284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.743 [2024-11-20 14:37:04.564176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.743 [2024-11-20 14:37:04.697223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.001 [2024-11-20 14:37:04.910571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.001 [2024-11-20 14:37:04.910625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.570 [2024-11-20 14:37:05.394941] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:04.570 [2024-11-20 14:37:05.395016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:04.570 [2024-11-20 14:37:05.395052] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:04.570 [2024-11-20 14:37:05.395069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:04.570 14:37:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.570 14:37:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.570 "name": "Existed_Raid", 00:20:04.570 "uuid": "7ff2f71f-7afc-49a6-b717-5e04c9f76063", 00:20:04.570 "strip_size_kb": 0, 00:20:04.570 "state": "configuring", 00:20:04.570 "raid_level": "raid1", 00:20:04.570 "superblock": true, 00:20:04.570 "num_base_bdevs": 2, 00:20:04.570 "num_base_bdevs_discovered": 0, 00:20:04.570 "num_base_bdevs_operational": 2, 00:20:04.570 "base_bdevs_list": [ 00:20:04.570 { 00:20:04.570 "name": "BaseBdev1", 00:20:04.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.570 "is_configured": false, 00:20:04.570 "data_offset": 0, 00:20:04.570 "data_size": 0 00:20:04.570 }, 00:20:04.570 { 00:20:04.570 "name": "BaseBdev2", 00:20:04.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.570 "is_configured": false, 00:20:04.570 "data_offset": 0, 00:20:04.570 "data_size": 0 00:20:04.570 } 00:20:04.570 ] 00:20:04.570 }' 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.570 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.137 [2024-11-20 14:37:05.927003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:05.137 [2024-11-20 14:37:05.927050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.137 [2024-11-20 14:37:05.934983] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:05.137 [2024-11-20 14:37:05.935176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:05.137 [2024-11-20 14:37:05.935302] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:05.137 [2024-11-20 14:37:05.935478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.137 [2024-11-20 14:37:05.980614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.137 BaseBdev1 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.137 14:37:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.137 [ 00:20:05.137 { 00:20:05.137 "name": "BaseBdev1", 00:20:05.137 "aliases": [ 00:20:05.137 "f7b0eab2-9f3e-428a-b34d-24a02c6ad2f1" 00:20:05.137 ], 00:20:05.137 "product_name": "Malloc disk", 00:20:05.137 "block_size": 4128, 00:20:05.137 "num_blocks": 8192, 00:20:05.137 "uuid": "f7b0eab2-9f3e-428a-b34d-24a02c6ad2f1", 00:20:05.137 "md_size": 32, 00:20:05.137 
"md_interleave": true, 00:20:05.137 "dif_type": 0, 00:20:05.137 "assigned_rate_limits": { 00:20:05.137 "rw_ios_per_sec": 0, 00:20:05.137 "rw_mbytes_per_sec": 0, 00:20:05.137 "r_mbytes_per_sec": 0, 00:20:05.137 "w_mbytes_per_sec": 0 00:20:05.137 }, 00:20:05.137 "claimed": true, 00:20:05.137 "claim_type": "exclusive_write", 00:20:05.137 "zoned": false, 00:20:05.137 "supported_io_types": { 00:20:05.137 "read": true, 00:20:05.137 "write": true, 00:20:05.137 "unmap": true, 00:20:05.137 "flush": true, 00:20:05.137 "reset": true, 00:20:05.137 "nvme_admin": false, 00:20:05.137 "nvme_io": false, 00:20:05.137 "nvme_io_md": false, 00:20:05.137 "write_zeroes": true, 00:20:05.137 "zcopy": true, 00:20:05.137 "get_zone_info": false, 00:20:05.137 "zone_management": false, 00:20:05.137 "zone_append": false, 00:20:05.137 "compare": false, 00:20:05.137 "compare_and_write": false, 00:20:05.137 "abort": true, 00:20:05.137 "seek_hole": false, 00:20:05.138 "seek_data": false, 00:20:05.138 "copy": true, 00:20:05.138 "nvme_iov_md": false 00:20:05.138 }, 00:20:05.138 "memory_domains": [ 00:20:05.138 { 00:20:05.138 "dma_device_id": "system", 00:20:05.138 "dma_device_type": 1 00:20:05.138 }, 00:20:05.138 { 00:20:05.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.138 "dma_device_type": 2 00:20:05.138 } 00:20:05.138 ], 00:20:05.138 "driver_specific": {} 00:20:05.138 } 00:20:05.138 ] 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.138 14:37:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.138 "name": "Existed_Raid", 00:20:05.138 "uuid": "541a8bd4-82ae-4c77-b009-bf62da44119f", 00:20:05.138 "strip_size_kb": 0, 00:20:05.138 "state": "configuring", 00:20:05.138 "raid_level": "raid1", 
00:20:05.138 "superblock": true, 00:20:05.138 "num_base_bdevs": 2, 00:20:05.138 "num_base_bdevs_discovered": 1, 00:20:05.138 "num_base_bdevs_operational": 2, 00:20:05.138 "base_bdevs_list": [ 00:20:05.138 { 00:20:05.138 "name": "BaseBdev1", 00:20:05.138 "uuid": "f7b0eab2-9f3e-428a-b34d-24a02c6ad2f1", 00:20:05.138 "is_configured": true, 00:20:05.138 "data_offset": 256, 00:20:05.138 "data_size": 7936 00:20:05.138 }, 00:20:05.138 { 00:20:05.138 "name": "BaseBdev2", 00:20:05.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.138 "is_configured": false, 00:20:05.138 "data_offset": 0, 00:20:05.138 "data_size": 0 00:20:05.138 } 00:20:05.138 ] 00:20:05.138 }' 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.138 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.704 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:05.704 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.704 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.705 [2024-11-20 14:37:06.536906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:05.705 [2024-11-20 14:37:06.537099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.705 [2024-11-20 14:37:06.544951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.705 [2024-11-20 14:37:06.547609] bdev.c:8482:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:05.705 [2024-11-20 14:37:06.547803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.705 
14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.705 "name": "Existed_Raid", 00:20:05.705 "uuid": "8313f470-b3ba-4d75-985b-a4430a54ea03", 00:20:05.705 "strip_size_kb": 0, 00:20:05.705 "state": "configuring", 00:20:05.705 "raid_level": "raid1", 00:20:05.705 "superblock": true, 00:20:05.705 "num_base_bdevs": 2, 00:20:05.705 "num_base_bdevs_discovered": 1, 00:20:05.705 "num_base_bdevs_operational": 2, 00:20:05.705 "base_bdevs_list": [ 00:20:05.705 { 00:20:05.705 "name": "BaseBdev1", 00:20:05.705 "uuid": "f7b0eab2-9f3e-428a-b34d-24a02c6ad2f1", 00:20:05.705 "is_configured": true, 00:20:05.705 "data_offset": 256, 00:20:05.705 "data_size": 7936 00:20:05.705 }, 00:20:05.705 { 00:20:05.705 "name": "BaseBdev2", 00:20:05.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.705 "is_configured": false, 00:20:05.705 "data_offset": 0, 00:20:05.705 "data_size": 0 00:20:05.705 } 00:20:05.705 ] 00:20:05.705 }' 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:05.705 14:37:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.272 [2024-11-20 14:37:07.088090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.272 BaseBdev2 00:20:06.272 [2024-11-20 14:37:07.088583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:06.272 [2024-11-20 14:37:07.088609] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:06.272 [2024-11-20 14:37:07.088742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:06.272 [2024-11-20 14:37:07.088848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:06.272 [2024-11-20 14:37:07.088869] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:06.272 [2024-11-20 14:37:07.088957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.272 [ 00:20:06.272 { 00:20:06.272 "name": "BaseBdev2", 00:20:06.272 "aliases": [ 00:20:06.272 "2b30dd46-f60e-4f9d-8312-a50db9798df6" 00:20:06.272 ], 00:20:06.272 "product_name": "Malloc disk", 00:20:06.272 "block_size": 4128, 00:20:06.272 "num_blocks": 8192, 00:20:06.272 "uuid": "2b30dd46-f60e-4f9d-8312-a50db9798df6", 00:20:06.272 "md_size": 32, 00:20:06.272 "md_interleave": true, 00:20:06.272 "dif_type": 0, 00:20:06.272 "assigned_rate_limits": { 00:20:06.272 "rw_ios_per_sec": 0, 00:20:06.272 "rw_mbytes_per_sec": 0, 00:20:06.272 "r_mbytes_per_sec": 0, 00:20:06.272 "w_mbytes_per_sec": 0 00:20:06.272 }, 00:20:06.272 "claimed": true, 00:20:06.272 "claim_type": "exclusive_write", 
00:20:06.272 "zoned": false, 00:20:06.272 "supported_io_types": { 00:20:06.272 "read": true, 00:20:06.272 "write": true, 00:20:06.272 "unmap": true, 00:20:06.272 "flush": true, 00:20:06.272 "reset": true, 00:20:06.272 "nvme_admin": false, 00:20:06.272 "nvme_io": false, 00:20:06.272 "nvme_io_md": false, 00:20:06.272 "write_zeroes": true, 00:20:06.272 "zcopy": true, 00:20:06.272 "get_zone_info": false, 00:20:06.272 "zone_management": false, 00:20:06.272 "zone_append": false, 00:20:06.272 "compare": false, 00:20:06.272 "compare_and_write": false, 00:20:06.272 "abort": true, 00:20:06.272 "seek_hole": false, 00:20:06.272 "seek_data": false, 00:20:06.272 "copy": true, 00:20:06.272 "nvme_iov_md": false 00:20:06.272 }, 00:20:06.272 "memory_domains": [ 00:20:06.272 { 00:20:06.272 "dma_device_id": "system", 00:20:06.272 "dma_device_type": 1 00:20:06.272 }, 00:20:06.272 { 00:20:06.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.272 "dma_device_type": 2 00:20:06.272 } 00:20:06.272 ], 00:20:06.272 "driver_specific": {} 00:20:06.272 } 00:20:06.272 ] 00:20:06.272 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.273 
14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.273 "name": "Existed_Raid", 00:20:06.273 "uuid": "8313f470-b3ba-4d75-985b-a4430a54ea03", 00:20:06.273 "strip_size_kb": 0, 00:20:06.273 "state": "online", 00:20:06.273 "raid_level": "raid1", 00:20:06.273 "superblock": true, 00:20:06.273 "num_base_bdevs": 2, 00:20:06.273 "num_base_bdevs_discovered": 2, 00:20:06.273 
"num_base_bdevs_operational": 2, 00:20:06.273 "base_bdevs_list": [ 00:20:06.273 { 00:20:06.273 "name": "BaseBdev1", 00:20:06.273 "uuid": "f7b0eab2-9f3e-428a-b34d-24a02c6ad2f1", 00:20:06.273 "is_configured": true, 00:20:06.273 "data_offset": 256, 00:20:06.273 "data_size": 7936 00:20:06.273 }, 00:20:06.273 { 00:20:06.273 "name": "BaseBdev2", 00:20:06.273 "uuid": "2b30dd46-f60e-4f9d-8312-a50db9798df6", 00:20:06.273 "is_configured": true, 00:20:06.273 "data_offset": 256, 00:20:06.273 "data_size": 7936 00:20:06.273 } 00:20:06.273 ] 00:20:06.273 }' 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.273 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.841 14:37:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.841 [2024-11-20 14:37:07.636734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.841 "name": "Existed_Raid", 00:20:06.841 "aliases": [ 00:20:06.841 "8313f470-b3ba-4d75-985b-a4430a54ea03" 00:20:06.841 ], 00:20:06.841 "product_name": "Raid Volume", 00:20:06.841 "block_size": 4128, 00:20:06.841 "num_blocks": 7936, 00:20:06.841 "uuid": "8313f470-b3ba-4d75-985b-a4430a54ea03", 00:20:06.841 "md_size": 32, 00:20:06.841 "md_interleave": true, 00:20:06.841 "dif_type": 0, 00:20:06.841 "assigned_rate_limits": { 00:20:06.841 "rw_ios_per_sec": 0, 00:20:06.841 "rw_mbytes_per_sec": 0, 00:20:06.841 "r_mbytes_per_sec": 0, 00:20:06.841 "w_mbytes_per_sec": 0 00:20:06.841 }, 00:20:06.841 "claimed": false, 00:20:06.841 "zoned": false, 00:20:06.841 "supported_io_types": { 00:20:06.841 "read": true, 00:20:06.841 "write": true, 00:20:06.841 "unmap": false, 00:20:06.841 "flush": false, 00:20:06.841 "reset": true, 00:20:06.841 "nvme_admin": false, 00:20:06.841 "nvme_io": false, 00:20:06.841 "nvme_io_md": false, 00:20:06.841 "write_zeroes": true, 00:20:06.841 "zcopy": false, 00:20:06.841 "get_zone_info": false, 00:20:06.841 "zone_management": false, 00:20:06.841 "zone_append": false, 00:20:06.841 "compare": false, 00:20:06.841 "compare_and_write": false, 00:20:06.841 "abort": false, 00:20:06.841 "seek_hole": false, 00:20:06.841 "seek_data": false, 00:20:06.841 "copy": false, 00:20:06.841 "nvme_iov_md": false 00:20:06.841 }, 00:20:06.841 "memory_domains": [ 00:20:06.841 { 00:20:06.841 "dma_device_id": "system", 00:20:06.841 "dma_device_type": 1 00:20:06.841 }, 00:20:06.841 { 00:20:06.841 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:06.841 "dma_device_type": 2 00:20:06.841 }, 00:20:06.841 { 00:20:06.841 "dma_device_id": "system", 00:20:06.841 "dma_device_type": 1 00:20:06.841 }, 00:20:06.841 { 00:20:06.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.841 "dma_device_type": 2 00:20:06.841 } 00:20:06.841 ], 00:20:06.841 "driver_specific": { 00:20:06.841 "raid": { 00:20:06.841 "uuid": "8313f470-b3ba-4d75-985b-a4430a54ea03", 00:20:06.841 "strip_size_kb": 0, 00:20:06.841 "state": "online", 00:20:06.841 "raid_level": "raid1", 00:20:06.841 "superblock": true, 00:20:06.841 "num_base_bdevs": 2, 00:20:06.841 "num_base_bdevs_discovered": 2, 00:20:06.841 "num_base_bdevs_operational": 2, 00:20:06.841 "base_bdevs_list": [ 00:20:06.841 { 00:20:06.841 "name": "BaseBdev1", 00:20:06.841 "uuid": "f7b0eab2-9f3e-428a-b34d-24a02c6ad2f1", 00:20:06.841 "is_configured": true, 00:20:06.841 "data_offset": 256, 00:20:06.841 "data_size": 7936 00:20:06.841 }, 00:20:06.841 { 00:20:06.841 "name": "BaseBdev2", 00:20:06.841 "uuid": "2b30dd46-f60e-4f9d-8312-a50db9798df6", 00:20:06.841 "is_configured": true, 00:20:06.841 "data_offset": 256, 00:20:06.841 "data_size": 7936 00:20:06.841 } 00:20:06.841 ] 00:20:06.841 } 00:20:06.841 } 00:20:06.841 }' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:06.841 BaseBdev2' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.841 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:07.101 
14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.101 [2024-11-20 14:37:07.904435] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.101 14:37:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.101 14:37:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.101 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.101 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.101 "name": "Existed_Raid", 00:20:07.101 "uuid": "8313f470-b3ba-4d75-985b-a4430a54ea03", 00:20:07.101 "strip_size_kb": 0, 00:20:07.101 "state": "online", 00:20:07.101 "raid_level": "raid1", 00:20:07.101 "superblock": true, 00:20:07.101 "num_base_bdevs": 2, 00:20:07.101 "num_base_bdevs_discovered": 1, 00:20:07.101 "num_base_bdevs_operational": 1, 00:20:07.101 "base_bdevs_list": [ 00:20:07.101 { 00:20:07.101 "name": null, 00:20:07.101 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:07.101 "is_configured": false, 00:20:07.101 "data_offset": 0, 00:20:07.101 "data_size": 7936 00:20:07.101 }, 00:20:07.101 { 00:20:07.101 "name": "BaseBdev2", 00:20:07.101 "uuid": "2b30dd46-f60e-4f9d-8312-a50db9798df6", 00:20:07.101 "is_configured": true, 00:20:07.101 "data_offset": 256, 00:20:07.101 "data_size": 7936 00:20:07.101 } 00:20:07.101 ] 00:20:07.101 }' 00:20:07.101 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.101 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:07.668 14:37:08 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.668 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.668 [2024-11-20 14:37:08.578421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:07.669 [2024-11-20 14:37:08.578743] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:07.669 [2024-11-20 14:37:08.663944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:07.669 [2024-11-20 14:37:08.664253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:07.669 [2024-11-20 14:37:08.664290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89004 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89004 ']' 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89004 00:20:07.669 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:07.928 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.928 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89004 00:20:07.928 killing process with pid 89004 00:20:07.928 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.928 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.928 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89004' 00:20:07.928 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89004 00:20:07.928 [2024-11-20 14:37:08.755597] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:07.928 14:37:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89004 00:20:07.928 [2024-11-20 14:37:08.770574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:08.864 
************************************ 00:20:08.864 END TEST raid_state_function_test_sb_md_interleaved 00:20:08.864 ************************************ 00:20:08.864 14:37:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:08.864 00:20:08.864 real 0m5.554s 00:20:08.864 user 0m8.380s 00:20:08.864 sys 0m0.811s 00:20:08.864 14:37:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:08.864 14:37:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.864 14:37:09 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:08.864 14:37:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:08.864 14:37:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:08.864 14:37:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:08.864 ************************************ 00:20:08.864 START TEST raid_superblock_test_md_interleaved 00:20:08.864 ************************************ 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89258 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89258 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89258 ']' 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.864 14:37:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.123 [2024-11-20 14:37:10.013347] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:20:09.123 [2024-11-20 14:37:10.013537] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89258 ] 00:20:09.382 [2024-11-20 14:37:10.204336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.382 [2024-11-20 14:37:10.359725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.640 [2024-11-20 14:37:10.568721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.640 [2024-11-20 14:37:10.568787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.219 malloc1 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.219 [2024-11-20 14:37:11.060466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.219 [2024-11-20 14:37:11.060732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.219 [2024-11-20 14:37:11.060909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:10.219 [2024-11-20 14:37:11.061063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.219 
[2024-11-20 14:37:11.063928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.219 [2024-11-20 14:37:11.064185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.219 pt1 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.219 malloc2 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.219 [2024-11-20 14:37:11.113532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:10.219 [2024-11-20 14:37:11.113615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.219 [2024-11-20 14:37:11.113692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:10.219 [2024-11-20 14:37:11.113709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.219 [2024-11-20 14:37:11.116247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.219 [2024-11-20 14:37:11.116286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:10.219 pt2 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.219 [2024-11-20 14:37:11.121571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.219 [2024-11-20 14:37:11.124182] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.219 [2024-11-20 14:37:11.124584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:10.219 [2024-11-20 14:37:11.124770] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:10.219 [2024-11-20 14:37:11.124910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:10.219 [2024-11-20 14:37:11.125147] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:10.219 [2024-11-20 14:37:11.125173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:10.219 [2024-11-20 14:37:11.125265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.219 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.220 
14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.220 "name": "raid_bdev1", 00:20:10.220 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:10.220 "strip_size_kb": 0, 00:20:10.220 "state": "online", 00:20:10.220 "raid_level": "raid1", 00:20:10.220 "superblock": true, 00:20:10.220 "num_base_bdevs": 2, 00:20:10.220 "num_base_bdevs_discovered": 2, 00:20:10.220 "num_base_bdevs_operational": 2, 00:20:10.220 "base_bdevs_list": [ 00:20:10.220 { 00:20:10.220 "name": "pt1", 00:20:10.220 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:10.220 "is_configured": true, 00:20:10.220 "data_offset": 256, 00:20:10.220 "data_size": 7936 00:20:10.220 }, 00:20:10.220 { 00:20:10.220 "name": "pt2", 00:20:10.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.220 "is_configured": true, 00:20:10.220 "data_offset": 256, 00:20:10.220 "data_size": 7936 00:20:10.220 } 00:20:10.220 ] 00:20:10.220 }' 00:20:10.220 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.220 14:37:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.800 [2024-11-20 14:37:11.646176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.800 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:10.800 "name": "raid_bdev1", 00:20:10.800 "aliases": [ 00:20:10.800 "292e17d9-9717-4689-80e8-51c1f925dd2f" 00:20:10.800 ], 00:20:10.800 "product_name": "Raid Volume", 00:20:10.800 "block_size": 4128, 00:20:10.800 "num_blocks": 7936, 00:20:10.800 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:10.800 "md_size": 32, 
00:20:10.800 "md_interleave": true, 00:20:10.800 "dif_type": 0, 00:20:10.800 "assigned_rate_limits": { 00:20:10.800 "rw_ios_per_sec": 0, 00:20:10.800 "rw_mbytes_per_sec": 0, 00:20:10.800 "r_mbytes_per_sec": 0, 00:20:10.800 "w_mbytes_per_sec": 0 00:20:10.800 }, 00:20:10.800 "claimed": false, 00:20:10.800 "zoned": false, 00:20:10.800 "supported_io_types": { 00:20:10.800 "read": true, 00:20:10.800 "write": true, 00:20:10.800 "unmap": false, 00:20:10.800 "flush": false, 00:20:10.800 "reset": true, 00:20:10.800 "nvme_admin": false, 00:20:10.800 "nvme_io": false, 00:20:10.800 "nvme_io_md": false, 00:20:10.800 "write_zeroes": true, 00:20:10.800 "zcopy": false, 00:20:10.800 "get_zone_info": false, 00:20:10.800 "zone_management": false, 00:20:10.800 "zone_append": false, 00:20:10.800 "compare": false, 00:20:10.800 "compare_and_write": false, 00:20:10.800 "abort": false, 00:20:10.800 "seek_hole": false, 00:20:10.800 "seek_data": false, 00:20:10.800 "copy": false, 00:20:10.800 "nvme_iov_md": false 00:20:10.801 }, 00:20:10.801 "memory_domains": [ 00:20:10.801 { 00:20:10.801 "dma_device_id": "system", 00:20:10.801 "dma_device_type": 1 00:20:10.801 }, 00:20:10.801 { 00:20:10.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.801 "dma_device_type": 2 00:20:10.801 }, 00:20:10.801 { 00:20:10.801 "dma_device_id": "system", 00:20:10.801 "dma_device_type": 1 00:20:10.801 }, 00:20:10.801 { 00:20:10.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.801 "dma_device_type": 2 00:20:10.801 } 00:20:10.801 ], 00:20:10.801 "driver_specific": { 00:20:10.801 "raid": { 00:20:10.801 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:10.801 "strip_size_kb": 0, 00:20:10.801 "state": "online", 00:20:10.801 "raid_level": "raid1", 00:20:10.801 "superblock": true, 00:20:10.801 "num_base_bdevs": 2, 00:20:10.801 "num_base_bdevs_discovered": 2, 00:20:10.801 "num_base_bdevs_operational": 2, 00:20:10.801 "base_bdevs_list": [ 00:20:10.801 { 00:20:10.801 "name": "pt1", 00:20:10.801 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:10.801 "is_configured": true, 00:20:10.801 "data_offset": 256, 00:20:10.801 "data_size": 7936 00:20:10.801 }, 00:20:10.801 { 00:20:10.801 "name": "pt2", 00:20:10.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.801 "is_configured": true, 00:20:10.801 "data_offset": 256, 00:20:10.801 "data_size": 7936 00:20:10.801 } 00:20:10.801 ] 00:20:10.801 } 00:20:10.801 } 00:20:10.801 }' 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:10.801 pt2' 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:10.801 14:37:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:10.801 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.060 [2024-11-20 14:37:11.882102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=292e17d9-9717-4689-80e8-51c1f925dd2f 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 292e17d9-9717-4689-80e8-51c1f925dd2f ']' 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.060 [2024-11-20 14:37:11.933740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.060 [2024-11-20 14:37:11.933902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.060 [2024-11-20 14:37:11.934172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:11.060 [2024-11-20 14:37:11.934386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:11.060 [2024-11-20 14:37:11.934564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.060 14:37:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.060 14:37:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.060 [2024-11-20 14:37:12.081817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:11.060 [2024-11-20 14:37:12.084763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:11.060 [2024-11-20 14:37:12.084860] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:20:11.060 [2024-11-20 14:37:12.084959] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:11.060 [2024-11-20 14:37:12.084986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:11.060 [2024-11-20 14:37:12.085017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:11.060 request: 00:20:11.060 { 00:20:11.060 "name": "raid_bdev1", 00:20:11.060 "raid_level": "raid1", 00:20:11.060 "base_bdevs": [ 00:20:11.060 "malloc1", 00:20:11.060 "malloc2" 00:20:11.060 ], 00:20:11.060 "superblock": false, 00:20:11.060 "method": "bdev_raid_create", 00:20:11.060 "req_id": 1 00:20:11.060 } 00:20:11.060 Got JSON-RPC error response 00:20:11.060 response: 00:20:11.060 { 00:20:11.060 "code": -17, 00:20:11.060 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:11.060 } 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:11.060 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.060 14:37:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.061 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.319 [2024-11-20 14:37:12.149814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:11.319 [2024-11-20 14:37:12.150037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.319 [2024-11-20 14:37:12.150175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:11.319 [2024-11-20 14:37:12.150319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.319 [2024-11-20 14:37:12.153140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.319 [2024-11-20 14:37:12.153381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:11.319 [2024-11-20 14:37:12.153565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:11.319 [2024-11-20 14:37:12.153781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:11.319 pt1 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.319 14:37:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.319 
"name": "raid_bdev1", 00:20:11.319 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:11.319 "strip_size_kb": 0, 00:20:11.319 "state": "configuring", 00:20:11.319 "raid_level": "raid1", 00:20:11.319 "superblock": true, 00:20:11.319 "num_base_bdevs": 2, 00:20:11.319 "num_base_bdevs_discovered": 1, 00:20:11.319 "num_base_bdevs_operational": 2, 00:20:11.319 "base_bdevs_list": [ 00:20:11.319 { 00:20:11.319 "name": "pt1", 00:20:11.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.319 "is_configured": true, 00:20:11.319 "data_offset": 256, 00:20:11.319 "data_size": 7936 00:20:11.319 }, 00:20:11.319 { 00:20:11.319 "name": null, 00:20:11.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.319 "is_configured": false, 00:20:11.319 "data_offset": 256, 00:20:11.319 "data_size": 7936 00:20:11.319 } 00:20:11.319 ] 00:20:11.319 }' 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.319 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.887 [2024-11-20 14:37:12.682367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:11.887 [2024-11-20 14:37:12.682710] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.887 [2024-11-20 14:37:12.682769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:11.887 [2024-11-20 14:37:12.682789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.887 [2024-11-20 14:37:12.683089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.887 [2024-11-20 14:37:12.683117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:11.887 [2024-11-20 14:37:12.683182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:11.887 [2024-11-20 14:37:12.683215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:11.887 [2024-11-20 14:37:12.683319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:11.887 [2024-11-20 14:37:12.683338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:11.887 [2024-11-20 14:37:12.683420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:11.887 [2024-11-20 14:37:12.683501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:11.887 [2024-11-20 14:37:12.683514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:11.887 [2024-11-20 14:37:12.683590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.887 pt2 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:11.887 14:37:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.887 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.887 "name": 
"raid_bdev1", 00:20:11.887 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:11.887 "strip_size_kb": 0, 00:20:11.887 "state": "online", 00:20:11.887 "raid_level": "raid1", 00:20:11.887 "superblock": true, 00:20:11.887 "num_base_bdevs": 2, 00:20:11.887 "num_base_bdevs_discovered": 2, 00:20:11.887 "num_base_bdevs_operational": 2, 00:20:11.887 "base_bdevs_list": [ 00:20:11.887 { 00:20:11.887 "name": "pt1", 00:20:11.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:11.887 "is_configured": true, 00:20:11.887 "data_offset": 256, 00:20:11.887 "data_size": 7936 00:20:11.887 }, 00:20:11.887 { 00:20:11.887 "name": "pt2", 00:20:11.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:11.887 "is_configured": true, 00:20:11.888 "data_offset": 256, 00:20:11.888 "data_size": 7936 00:20:11.888 } 00:20:11.888 ] 00:20:11.888 }' 00:20:11.888 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.888 14:37:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.146 14:37:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.146 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.146 [2024-11-20 14:37:13.190901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.405 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.405 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.405 "name": "raid_bdev1", 00:20:12.405 "aliases": [ 00:20:12.405 "292e17d9-9717-4689-80e8-51c1f925dd2f" 00:20:12.405 ], 00:20:12.405 "product_name": "Raid Volume", 00:20:12.405 "block_size": 4128, 00:20:12.405 "num_blocks": 7936, 00:20:12.405 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:12.405 "md_size": 32, 00:20:12.405 "md_interleave": true, 00:20:12.405 "dif_type": 0, 00:20:12.405 "assigned_rate_limits": { 00:20:12.405 "rw_ios_per_sec": 0, 00:20:12.405 "rw_mbytes_per_sec": 0, 00:20:12.405 "r_mbytes_per_sec": 0, 00:20:12.405 "w_mbytes_per_sec": 0 00:20:12.405 }, 00:20:12.405 "claimed": false, 00:20:12.405 "zoned": false, 00:20:12.405 "supported_io_types": { 00:20:12.405 "read": true, 00:20:12.405 "write": true, 00:20:12.405 "unmap": false, 00:20:12.405 "flush": false, 00:20:12.405 "reset": true, 00:20:12.405 "nvme_admin": false, 00:20:12.405 "nvme_io": false, 00:20:12.405 "nvme_io_md": false, 00:20:12.405 "write_zeroes": true, 00:20:12.405 "zcopy": false, 00:20:12.405 "get_zone_info": false, 00:20:12.405 "zone_management": false, 00:20:12.405 "zone_append": false, 00:20:12.405 "compare": false, 00:20:12.405 "compare_and_write": false, 00:20:12.405 "abort": false, 00:20:12.405 "seek_hole": false, 00:20:12.405 "seek_data": false, 00:20:12.405 "copy": false, 00:20:12.405 "nvme_iov_md": 
false 00:20:12.405 }, 00:20:12.405 "memory_domains": [ 00:20:12.405 { 00:20:12.405 "dma_device_id": "system", 00:20:12.405 "dma_device_type": 1 00:20:12.405 }, 00:20:12.405 { 00:20:12.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.405 "dma_device_type": 2 00:20:12.405 }, 00:20:12.405 { 00:20:12.405 "dma_device_id": "system", 00:20:12.405 "dma_device_type": 1 00:20:12.405 }, 00:20:12.405 { 00:20:12.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.405 "dma_device_type": 2 00:20:12.405 } 00:20:12.405 ], 00:20:12.405 "driver_specific": { 00:20:12.405 "raid": { 00:20:12.405 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:12.405 "strip_size_kb": 0, 00:20:12.405 "state": "online", 00:20:12.405 "raid_level": "raid1", 00:20:12.405 "superblock": true, 00:20:12.405 "num_base_bdevs": 2, 00:20:12.405 "num_base_bdevs_discovered": 2, 00:20:12.405 "num_base_bdevs_operational": 2, 00:20:12.405 "base_bdevs_list": [ 00:20:12.405 { 00:20:12.405 "name": "pt1", 00:20:12.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:12.405 "is_configured": true, 00:20:12.405 "data_offset": 256, 00:20:12.405 "data_size": 7936 00:20:12.405 }, 00:20:12.405 { 00:20:12.405 "name": "pt2", 00:20:12.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.405 "is_configured": true, 00:20:12.405 "data_offset": 256, 00:20:12.405 "data_size": 7936 00:20:12.405 } 00:20:12.405 ] 00:20:12.405 } 00:20:12.405 } 00:20:12.405 }' 00:20:12.405 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.405 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:12.405 pt2' 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:20:12.406 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.664 [2024-11-20 14:37:13.467182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 292e17d9-9717-4689-80e8-51c1f925dd2f '!=' 292e17d9-9717-4689-80e8-51c1f925dd2f ']' 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.664 [2024-11-20 14:37:13.518702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:12.664 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:12.665 "name": "raid_bdev1", 00:20:12.665 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:12.665 "strip_size_kb": 0, 00:20:12.665 "state": "online", 00:20:12.665 "raid_level": "raid1", 00:20:12.665 "superblock": true, 00:20:12.665 "num_base_bdevs": 2, 00:20:12.665 "num_base_bdevs_discovered": 1, 00:20:12.665 "num_base_bdevs_operational": 1, 00:20:12.665 "base_bdevs_list": [ 00:20:12.665 { 00:20:12.665 "name": null, 00:20:12.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.665 "is_configured": false, 00:20:12.665 "data_offset": 0, 00:20:12.665 "data_size": 7936 00:20:12.665 }, 00:20:12.665 { 00:20:12.665 "name": "pt2", 00:20:12.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:12.665 "is_configured": true, 00:20:12.665 "data_offset": 256, 00:20:12.665 "data_size": 7936 00:20:12.665 } 00:20:12.665 ] 00:20:12.665 }' 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.665 14:37:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.233 [2024-11-20 14:37:14.058832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.233 [2024-11-20 14:37:14.058871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.233 [2024-11-20 14:37:14.058996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.233 [2024-11-20 14:37:14.059074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:13.233 [2024-11-20 14:37:14.059096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.233 [2024-11-20 14:37:14.134852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.233 [2024-11-20 14:37:14.135124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.233 [2024-11-20 14:37:14.135161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:13.233 [2024-11-20 14:37:14.135179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.233 [2024-11-20 14:37:14.137966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.233 [2024-11-20 14:37:14.138032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.233 [2024-11-20 14:37:14.138152] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:13.233 [2024-11-20 14:37:14.138271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.233 [2024-11-20 14:37:14.138367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:13.233 [2024-11-20 14:37:14.138389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:20:13.233 [2024-11-20 14:37:14.138527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:13.233 [2024-11-20 14:37:14.138679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:13.233 [2024-11-20 14:37:14.138694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:13.233 [2024-11-20 14:37:14.138847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.233 pt2 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.233 14:37:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.233 "name": "raid_bdev1", 00:20:13.233 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:13.233 "strip_size_kb": 0, 00:20:13.233 "state": "online", 00:20:13.233 "raid_level": "raid1", 00:20:13.233 "superblock": true, 00:20:13.233 "num_base_bdevs": 2, 00:20:13.233 "num_base_bdevs_discovered": 1, 00:20:13.233 "num_base_bdevs_operational": 1, 00:20:13.233 "base_bdevs_list": [ 00:20:13.233 { 00:20:13.233 "name": null, 00:20:13.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.233 "is_configured": false, 00:20:13.233 "data_offset": 256, 00:20:13.233 "data_size": 7936 00:20:13.233 }, 00:20:13.233 { 00:20:13.233 "name": "pt2", 00:20:13.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.233 "is_configured": true, 00:20:13.233 "data_offset": 256, 00:20:13.233 "data_size": 7936 00:20:13.233 } 00:20:13.233 ] 00:20:13.233 }' 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.233 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:13.801 14:37:14 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.801 [2024-11-20 14:37:14.674993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.801 [2024-11-20 14:37:14.675215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.801 [2024-11-20 14:37:14.675359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.801 [2024-11-20 14:37:14.675438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.801 [2024-11-20 14:37:14.675456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.801 [2024-11-20 14:37:14.739042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:13.801 [2024-11-20 14:37:14.739313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.801 [2024-11-20 14:37:14.739393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:13.801 [2024-11-20 14:37:14.739662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.801 [2024-11-20 14:37:14.742564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.801 [2024-11-20 14:37:14.742612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:13.801 [2024-11-20 14:37:14.742763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:13.801 [2024-11-20 14:37:14.742823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:13.801 [2024-11-20 14:37:14.742964] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:13.801 [2024-11-20 14:37:14.742982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.801 [2024-11-20 14:37:14.743036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:13.801 [2024-11-20 14:37:14.743101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.801 [2024-11-20 14:37:14.743299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:20:13.801 [2024-11-20 14:37:14.743322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:13.801 pt1 00:20:13.801 [2024-11-20 14:37:14.743414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:13.801 [2024-11-20 14:37:14.743513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:13.801 [2024-11-20 14:37:14.743537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:13.801 [2024-11-20 14:37:14.743650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.801 14:37:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.801 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.801 "name": "raid_bdev1", 00:20:13.801 "uuid": "292e17d9-9717-4689-80e8-51c1f925dd2f", 00:20:13.801 "strip_size_kb": 0, 00:20:13.801 "state": "online", 00:20:13.801 "raid_level": "raid1", 00:20:13.801 "superblock": true, 00:20:13.801 "num_base_bdevs": 2, 00:20:13.801 "num_base_bdevs_discovered": 1, 00:20:13.801 "num_base_bdevs_operational": 1, 00:20:13.801 "base_bdevs_list": [ 00:20:13.801 { 00:20:13.801 "name": null, 00:20:13.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.801 "is_configured": false, 00:20:13.801 "data_offset": 256, 00:20:13.801 "data_size": 7936 00:20:13.801 }, 00:20:13.801 { 00:20:13.801 "name": "pt2", 00:20:13.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:13.802 "is_configured": true, 00:20:13.802 "data_offset": 256, 00:20:13.802 "data_size": 7936 00:20:13.802 } 00:20:13.802 ] 00:20:13.802 }' 00:20:13.802 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.802 14:37:14 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.368 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:14.368 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.368 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.368 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.369 [2024-11-20 14:37:15.339515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 292e17d9-9717-4689-80e8-51c1f925dd2f '!=' 292e17d9-9717-4689-80e8-51c1f925dd2f ']' 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89258 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89258 ']' 00:20:14.369 14:37:15 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89258 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89258 00:20:14.369 killing process with pid 89258 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89258' 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89258 00:20:14.369 [2024-11-20 14:37:15.422912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.369 14:37:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89258 00:20:14.369 [2024-11-20 14:37:15.423035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.369 [2024-11-20 14:37:15.423127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.369 [2024-11-20 14:37:15.423150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:14.628 [2024-11-20 14:37:15.593978] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:16.005 14:37:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:16.005 00:20:16.005 real 0m6.738s 00:20:16.005 user 0m10.681s 00:20:16.005 sys 0m0.995s 
00:20:16.005 14:37:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.005 ************************************ 00:20:16.005 END TEST raid_superblock_test_md_interleaved 00:20:16.005 ************************************ 00:20:16.005 14:37:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.005 14:37:16 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:16.005 14:37:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:16.005 14:37:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.005 14:37:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:16.005 ************************************ 00:20:16.005 START TEST raid_rebuild_test_sb_md_interleaved 00:20:16.005 ************************************ 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.005 14:37:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:16.005 
14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89592 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89592 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:16.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89592 ']' 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.005 14:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.005 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:16.005 Zero copy mechanism will not be used. 00:20:16.005 [2024-11-20 14:37:16.811148] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:20:16.005 [2024-11-20 14:37:16.811340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89592 ] 00:20:16.005 [2024-11-20 14:37:16.991441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.263 [2024-11-20 14:37:17.107277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.263 [2024-11-20 14:37:17.303783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.263 [2024-11-20 14:37:17.303984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.830 BaseBdev1_malloc 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.830 14:37:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.830 [2024-11-20 14:37:17.787245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:16.830 [2024-11-20 14:37:17.787486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.830 [2024-11-20 14:37:17.787540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:16.830 [2024-11-20 14:37:17.787560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.830 [2024-11-20 14:37:17.790161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.830 [2024-11-20 14:37:17.790252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:16.830 BaseBdev1 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.830 BaseBdev2_malloc 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.830 [2024-11-20 14:37:17.838805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:16.830 [2024-11-20 14:37:17.839054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.830 [2024-11-20 14:37:17.839117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:16.830 [2024-11-20 14:37:17.839137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.830 [2024-11-20 14:37:17.841809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.830 [2024-11-20 14:37:17.841857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:16.830 BaseBdev2 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.830 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.088 spare_malloc 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.088 spare_delay 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.088 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.088 [2024-11-20 14:37:17.911828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:17.088 [2024-11-20 14:37:17.912099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.088 [2024-11-20 14:37:17.912146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:17.088 [2024-11-20 14:37:17.912175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.089 [2024-11-20 14:37:17.914880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.089 [2024-11-20 14:37:17.914929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:17.089 spare 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.089 [2024-11-20 14:37:17.919882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.089 [2024-11-20 14:37:17.922384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.089 [2024-11-20 
14:37:17.922704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:17.089 [2024-11-20 14:37:17.922737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:17.089 [2024-11-20 14:37:17.922826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:17.089 [2024-11-20 14:37:17.922939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:17.089 [2024-11-20 14:37:17.922970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:17.089 [2024-11-20 14:37:17.923077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.089 "name": "raid_bdev1", 00:20:17.089 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:17.089 "strip_size_kb": 0, 00:20:17.089 "state": "online", 00:20:17.089 "raid_level": "raid1", 00:20:17.089 "superblock": true, 00:20:17.089 "num_base_bdevs": 2, 00:20:17.089 "num_base_bdevs_discovered": 2, 00:20:17.089 "num_base_bdevs_operational": 2, 00:20:17.089 "base_bdevs_list": [ 00:20:17.089 { 00:20:17.089 "name": "BaseBdev1", 00:20:17.089 "uuid": "b4b1a908-fefb-5bdd-9923-a19c278a396a", 00:20:17.089 "is_configured": true, 00:20:17.089 "data_offset": 256, 00:20:17.089 "data_size": 7936 00:20:17.089 }, 00:20:17.089 { 00:20:17.089 "name": "BaseBdev2", 00:20:17.089 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:17.089 "is_configured": true, 00:20:17.089 "data_offset": 256, 00:20:17.089 "data_size": 7936 00:20:17.089 } 00:20:17.089 ] 00:20:17.089 }' 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.089 14:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.656 14:37:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.656 [2024-11-20 14:37:18.464419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:17.656 14:37:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.656 [2024-11-20 14:37:18.564061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.656 14:37:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.656 "name": "raid_bdev1", 00:20:17.656 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:17.656 "strip_size_kb": 0, 00:20:17.656 "state": "online", 00:20:17.656 "raid_level": "raid1", 00:20:17.656 "superblock": true, 00:20:17.656 "num_base_bdevs": 2, 00:20:17.656 "num_base_bdevs_discovered": 1, 00:20:17.656 "num_base_bdevs_operational": 1, 00:20:17.656 "base_bdevs_list": [ 00:20:17.656 { 00:20:17.656 "name": null, 00:20:17.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.656 "is_configured": false, 00:20:17.656 "data_offset": 0, 00:20:17.656 "data_size": 7936 00:20:17.656 }, 00:20:17.656 { 00:20:17.656 "name": "BaseBdev2", 00:20:17.656 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:17.656 "is_configured": true, 00:20:17.656 "data_offset": 256, 00:20:17.656 "data_size": 7936 00:20:17.656 } 00:20:17.656 ] 00:20:17.656 }' 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.656 14:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.260 14:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:18.260 14:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.260 14:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.260 [2024-11-20 14:37:19.112306] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:18.260 [2024-11-20 14:37:19.129885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:18.260 14:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.260 14:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:18.260 [2024-11-20 14:37:19.132495] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.195 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.196 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.196 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.196 "name": "raid_bdev1", 00:20:19.196 
"uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:19.196 "strip_size_kb": 0, 00:20:19.196 "state": "online", 00:20:19.196 "raid_level": "raid1", 00:20:19.196 "superblock": true, 00:20:19.196 "num_base_bdevs": 2, 00:20:19.196 "num_base_bdevs_discovered": 2, 00:20:19.196 "num_base_bdevs_operational": 2, 00:20:19.196 "process": { 00:20:19.196 "type": "rebuild", 00:20:19.196 "target": "spare", 00:20:19.196 "progress": { 00:20:19.196 "blocks": 2560, 00:20:19.196 "percent": 32 00:20:19.196 } 00:20:19.196 }, 00:20:19.196 "base_bdevs_list": [ 00:20:19.196 { 00:20:19.196 "name": "spare", 00:20:19.196 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:19.196 "is_configured": true, 00:20:19.196 "data_offset": 256, 00:20:19.196 "data_size": 7936 00:20:19.196 }, 00:20:19.196 { 00:20:19.196 "name": "BaseBdev2", 00:20:19.196 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:19.196 "is_configured": true, 00:20:19.196 "data_offset": 256, 00:20:19.196 "data_size": 7936 00:20:19.196 } 00:20:19.196 ] 00:20:19.196 }' 00:20:19.196 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.196 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.196 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.454 [2024-11-20 14:37:20.301523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:19.454 [2024-11-20 14:37:20.341368] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:19.454 [2024-11-20 14:37:20.341701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.454 [2024-11-20 14:37:20.341951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:19.454 [2024-11-20 14:37:20.342111] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.454 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.454 "name": "raid_bdev1", 00:20:19.454 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:19.454 "strip_size_kb": 0, 00:20:19.454 "state": "online", 00:20:19.454 "raid_level": "raid1", 00:20:19.454 "superblock": true, 00:20:19.454 "num_base_bdevs": 2, 00:20:19.454 "num_base_bdevs_discovered": 1, 00:20:19.454 "num_base_bdevs_operational": 1, 00:20:19.454 "base_bdevs_list": [ 00:20:19.454 { 00:20:19.454 "name": null, 00:20:19.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.454 "is_configured": false, 00:20:19.454 "data_offset": 0, 00:20:19.454 "data_size": 7936 00:20:19.454 }, 00:20:19.454 { 00:20:19.454 "name": "BaseBdev2", 00:20:19.454 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:19.454 "is_configured": true, 00:20:19.454 "data_offset": 256, 00:20:19.454 "data_size": 7936 00:20:19.454 } 00:20:19.454 ] 00:20:19.455 }' 00:20:19.455 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.455 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.022 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.022 "name": "raid_bdev1", 00:20:20.022 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:20.022 "strip_size_kb": 0, 00:20:20.022 "state": "online", 00:20:20.022 "raid_level": "raid1", 00:20:20.022 "superblock": true, 00:20:20.022 "num_base_bdevs": 2, 00:20:20.022 "num_base_bdevs_discovered": 1, 00:20:20.022 "num_base_bdevs_operational": 1, 00:20:20.022 "base_bdevs_list": [ 00:20:20.022 { 00:20:20.022 "name": null, 00:20:20.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.023 "is_configured": false, 00:20:20.023 "data_offset": 0, 00:20:20.023 "data_size": 7936 00:20:20.023 }, 00:20:20.023 { 00:20:20.023 "name": "BaseBdev2", 00:20:20.023 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:20.023 "is_configured": true, 00:20:20.023 "data_offset": 256, 00:20:20.023 "data_size": 7936 00:20:20.023 } 00:20:20.023 ] 00:20:20.023 }' 
00:20:20.023 14:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.023 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:20.023 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.023 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:20.023 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:20.023 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.023 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.023 [2024-11-20 14:37:21.069115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:20.281 [2024-11-20 14:37:21.085548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:20.281 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.281 14:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:20.281 [2024-11-20 14:37:21.088170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.217 "name": "raid_bdev1", 00:20:21.217 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:21.217 "strip_size_kb": 0, 00:20:21.217 "state": "online", 00:20:21.217 "raid_level": "raid1", 00:20:21.217 "superblock": true, 00:20:21.217 "num_base_bdevs": 2, 00:20:21.217 "num_base_bdevs_discovered": 2, 00:20:21.217 "num_base_bdevs_operational": 2, 00:20:21.217 "process": { 00:20:21.217 "type": "rebuild", 00:20:21.217 "target": "spare", 00:20:21.217 "progress": { 00:20:21.217 "blocks": 2560, 00:20:21.217 "percent": 32 00:20:21.217 } 00:20:21.217 }, 00:20:21.217 "base_bdevs_list": [ 00:20:21.217 { 00:20:21.217 "name": "spare", 00:20:21.217 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:21.217 "is_configured": true, 00:20:21.217 "data_offset": 256, 00:20:21.217 "data_size": 7936 00:20:21.217 }, 00:20:21.217 { 00:20:21.217 "name": "BaseBdev2", 00:20:21.217 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:21.217 "is_configured": true, 00:20:21.217 "data_offset": 256, 00:20:21.217 "data_size": 7936 00:20:21.217 } 00:20:21.217 ] 00:20:21.217 }' 00:20:21.217 14:37:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:21.217 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=804 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.217 14:37:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.217 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.476 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.476 "name": "raid_bdev1", 00:20:21.476 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:21.476 "strip_size_kb": 0, 00:20:21.476 "state": "online", 00:20:21.476 "raid_level": "raid1", 00:20:21.476 "superblock": true, 00:20:21.476 "num_base_bdevs": 2, 00:20:21.476 "num_base_bdevs_discovered": 2, 00:20:21.476 "num_base_bdevs_operational": 2, 00:20:21.476 "process": { 00:20:21.476 "type": "rebuild", 00:20:21.476 "target": "spare", 00:20:21.476 "progress": { 00:20:21.476 "blocks": 2816, 00:20:21.476 "percent": 35 00:20:21.476 } 00:20:21.476 }, 00:20:21.476 "base_bdevs_list": [ 00:20:21.476 { 00:20:21.476 "name": "spare", 00:20:21.476 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:21.476 "is_configured": true, 00:20:21.476 "data_offset": 256, 00:20:21.476 "data_size": 7936 00:20:21.476 }, 00:20:21.476 { 00:20:21.476 "name": "BaseBdev2", 00:20:21.476 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:21.476 "is_configured": true, 00:20:21.476 "data_offset": 256, 00:20:21.476 "data_size": 7936 00:20:21.476 } 00:20:21.476 ] 00:20:21.476 }' 00:20:21.476 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.476 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.476 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.476 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.476 14:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.411 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.670 14:37:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.670 "name": "raid_bdev1", 00:20:22.670 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:22.670 "strip_size_kb": 0, 00:20:22.670 "state": "online", 00:20:22.670 "raid_level": "raid1", 00:20:22.671 "superblock": true, 00:20:22.671 "num_base_bdevs": 2, 00:20:22.671 "num_base_bdevs_discovered": 2, 00:20:22.671 "num_base_bdevs_operational": 2, 00:20:22.671 "process": { 00:20:22.671 "type": "rebuild", 00:20:22.671 "target": "spare", 00:20:22.671 "progress": { 00:20:22.671 "blocks": 5888, 00:20:22.671 "percent": 74 00:20:22.671 } 00:20:22.671 }, 00:20:22.671 "base_bdevs_list": [ 00:20:22.671 { 00:20:22.671 "name": "spare", 00:20:22.671 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:22.671 "is_configured": true, 00:20:22.671 "data_offset": 256, 00:20:22.671 "data_size": 7936 00:20:22.671 }, 00:20:22.671 { 00:20:22.671 "name": "BaseBdev2", 00:20:22.671 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:22.671 "is_configured": true, 00:20:22.671 "data_offset": 256, 00:20:22.671 "data_size": 7936 00:20:22.671 } 00:20:22.671 ] 00:20:22.671 }' 00:20:22.671 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.671 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.671 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.671 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.671 14:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:23.238 [2024-11-20 14:37:24.209664] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:23.238 [2024-11-20 14:37:24.209975] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:23.238 [2024-11-20 14:37:24.210159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.806 "name": "raid_bdev1", 00:20:23.806 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:23.806 "strip_size_kb": 0, 00:20:23.806 "state": "online", 00:20:23.806 "raid_level": "raid1", 00:20:23.806 "superblock": true, 00:20:23.806 "num_base_bdevs": 2, 00:20:23.806 
"num_base_bdevs_discovered": 2, 00:20:23.806 "num_base_bdevs_operational": 2, 00:20:23.806 "base_bdevs_list": [ 00:20:23.806 { 00:20:23.806 "name": "spare", 00:20:23.806 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:23.806 "is_configured": true, 00:20:23.806 "data_offset": 256, 00:20:23.806 "data_size": 7936 00:20:23.806 }, 00:20:23.806 { 00:20:23.806 "name": "BaseBdev2", 00:20:23.806 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:23.806 "is_configured": true, 00:20:23.806 "data_offset": 256, 00:20:23.806 "data_size": 7936 00:20:23.806 } 00:20:23.806 ] 00:20:23.806 }' 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.806 14:37:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.806 "name": "raid_bdev1", 00:20:23.806 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:23.806 "strip_size_kb": 0, 00:20:23.806 "state": "online", 00:20:23.806 "raid_level": "raid1", 00:20:23.806 "superblock": true, 00:20:23.806 "num_base_bdevs": 2, 00:20:23.806 "num_base_bdevs_discovered": 2, 00:20:23.806 "num_base_bdevs_operational": 2, 00:20:23.806 "base_bdevs_list": [ 00:20:23.806 { 00:20:23.806 "name": "spare", 00:20:23.806 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:23.806 "is_configured": true, 00:20:23.806 "data_offset": 256, 00:20:23.806 "data_size": 7936 00:20:23.806 }, 00:20:23.806 { 00:20:23.806 "name": "BaseBdev2", 00:20:23.806 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:23.806 "is_configured": true, 00:20:23.806 "data_offset": 256, 00:20:23.806 "data_size": 7936 00:20:23.806 } 00:20:23.806 ] 00:20:23.806 }' 00:20:23.806 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:24.064 14:37:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.064 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.064 "name": 
"raid_bdev1", 00:20:24.064 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:24.064 "strip_size_kb": 0, 00:20:24.064 "state": "online", 00:20:24.064 "raid_level": "raid1", 00:20:24.064 "superblock": true, 00:20:24.064 "num_base_bdevs": 2, 00:20:24.064 "num_base_bdevs_discovered": 2, 00:20:24.064 "num_base_bdevs_operational": 2, 00:20:24.064 "base_bdevs_list": [ 00:20:24.064 { 00:20:24.064 "name": "spare", 00:20:24.064 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:24.064 "is_configured": true, 00:20:24.064 "data_offset": 256, 00:20:24.064 "data_size": 7936 00:20:24.064 }, 00:20:24.064 { 00:20:24.064 "name": "BaseBdev2", 00:20:24.064 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:24.064 "is_configured": true, 00:20:24.064 "data_offset": 256, 00:20:24.064 "data_size": 7936 00:20:24.064 } 00:20:24.064 ] 00:20:24.064 }' 00:20:24.065 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.065 14:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 [2024-11-20 14:37:25.457137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.632 [2024-11-20 14:37:25.457327] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.632 [2024-11-20 14:37:25.457459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.632 [2024-11-20 14:37:25.457551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.632 [2024-11-20 
14:37:25.457567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.632 14:37:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 [2024-11-20 14:37:25.521162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.632 [2024-11-20 14:37:25.521378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.632 [2024-11-20 14:37:25.521422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:24.632 [2024-11-20 14:37:25.521438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.632 [2024-11-20 14:37:25.524264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.632 [2024-11-20 14:37:25.524306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.632 [2024-11-20 14:37:25.524396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:24.632 [2024-11-20 14:37:25.524453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.632 [2024-11-20 14:37:25.524588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.632 spare 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 [2024-11-20 14:37:25.624734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:24.632 [2024-11-20 14:37:25.624765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:24.632 [2024-11-20 14:37:25.624862] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:24.632 [2024-11-20 14:37:25.624956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:24.632 [2024-11-20 14:37:25.624972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:24.632 [2024-11-20 14:37:25.625063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.632 14:37:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.632 "name": "raid_bdev1", 00:20:24.632 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:24.632 "strip_size_kb": 0, 00:20:24.632 "state": "online", 00:20:24.632 "raid_level": "raid1", 00:20:24.632 "superblock": true, 00:20:24.632 "num_base_bdevs": 2, 00:20:24.632 "num_base_bdevs_discovered": 2, 00:20:24.632 "num_base_bdevs_operational": 2, 00:20:24.632 "base_bdevs_list": [ 00:20:24.632 { 00:20:24.632 "name": "spare", 00:20:24.632 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:24.632 "is_configured": true, 00:20:24.632 "data_offset": 256, 00:20:24.632 "data_size": 7936 00:20:24.632 }, 00:20:24.632 { 00:20:24.632 "name": "BaseBdev2", 00:20:24.632 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:24.632 "is_configured": true, 00:20:24.632 "data_offset": 256, 00:20:24.632 "data_size": 7936 00:20:24.632 } 00:20:24.632 ] 00:20:24.632 }' 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.632 14:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.199 14:37:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.199 "name": "raid_bdev1", 00:20:25.199 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:25.199 "strip_size_kb": 0, 00:20:25.199 "state": "online", 00:20:25.199 "raid_level": "raid1", 00:20:25.199 "superblock": true, 00:20:25.199 "num_base_bdevs": 2, 00:20:25.199 "num_base_bdevs_discovered": 2, 00:20:25.199 "num_base_bdevs_operational": 2, 00:20:25.199 "base_bdevs_list": [ 00:20:25.199 { 00:20:25.199 "name": "spare", 00:20:25.199 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:25.199 "is_configured": true, 00:20:25.199 "data_offset": 256, 00:20:25.199 "data_size": 7936 00:20:25.199 }, 00:20:25.199 { 00:20:25.199 "name": "BaseBdev2", 00:20:25.199 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:25.199 "is_configured": true, 00:20:25.199 "data_offset": 256, 00:20:25.199 "data_size": 7936 00:20:25.199 } 00:20:25.199 ] 00:20:25.199 }' 00:20:25.199 14:37:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.199 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.457 [2024-11-20 14:37:26.349533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.457 14:37:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.457 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.458 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.458 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.458 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.458 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.458 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.458 "name": "raid_bdev1", 00:20:25.458 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:25.458 "strip_size_kb": 0, 00:20:25.458 "state": "online", 00:20:25.458 
"raid_level": "raid1", 00:20:25.458 "superblock": true, 00:20:25.458 "num_base_bdevs": 2, 00:20:25.458 "num_base_bdevs_discovered": 1, 00:20:25.458 "num_base_bdevs_operational": 1, 00:20:25.458 "base_bdevs_list": [ 00:20:25.458 { 00:20:25.458 "name": null, 00:20:25.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.458 "is_configured": false, 00:20:25.458 "data_offset": 0, 00:20:25.458 "data_size": 7936 00:20:25.458 }, 00:20:25.458 { 00:20:25.458 "name": "BaseBdev2", 00:20:25.458 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:25.458 "is_configured": true, 00:20:25.458 "data_offset": 256, 00:20:25.458 "data_size": 7936 00:20:25.458 } 00:20:25.458 ] 00:20:25.458 }' 00:20:25.458 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.458 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.025 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:26.025 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.025 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.025 [2024-11-20 14:37:26.877743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.025 [2024-11-20 14:37:26.878045] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:26.025 [2024-11-20 14:37:26.878070] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:26.025 [2024-11-20 14:37:26.878139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.025 [2024-11-20 14:37:26.894683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:26.025 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.025 14:37:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:26.025 [2024-11-20 14:37:26.897291] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.960 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:26.960 "name": "raid_bdev1", 00:20:26.960 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:26.960 "strip_size_kb": 0, 00:20:26.960 "state": "online", 00:20:26.960 "raid_level": "raid1", 00:20:26.960 "superblock": true, 00:20:26.960 "num_base_bdevs": 2, 00:20:26.960 "num_base_bdevs_discovered": 2, 00:20:26.960 "num_base_bdevs_operational": 2, 00:20:26.960 "process": { 00:20:26.960 "type": "rebuild", 00:20:26.960 "target": "spare", 00:20:26.960 "progress": { 00:20:26.960 "blocks": 2560, 00:20:26.960 "percent": 32 00:20:26.960 } 00:20:26.960 }, 00:20:26.960 "base_bdevs_list": [ 00:20:26.960 { 00:20:26.960 "name": "spare", 00:20:26.960 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:26.960 "is_configured": true, 00:20:26.960 "data_offset": 256, 00:20:26.960 "data_size": 7936 00:20:26.960 }, 00:20:26.960 { 00:20:26.960 "name": "BaseBdev2", 00:20:26.961 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:26.961 "is_configured": true, 00:20:26.961 "data_offset": 256, 00:20:26.961 "data_size": 7936 00:20:26.961 } 00:20:26.961 ] 00:20:26.961 }' 00:20:26.961 14:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.961 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.961 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.219 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.219 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:27.219 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.219 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.219 [2024-11-20 14:37:28.058289] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:27.219 [2024-11-20 14:37:28.106001] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:27.219 [2024-11-20 14:37:28.106322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.219 [2024-11-20 14:37:28.106530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:27.219 [2024-11-20 14:37:28.106576] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:27.219 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.219 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.220 14:37:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.220 "name": "raid_bdev1", 00:20:27.220 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:27.220 "strip_size_kb": 0, 00:20:27.220 "state": "online", 00:20:27.220 "raid_level": "raid1", 00:20:27.220 "superblock": true, 00:20:27.220 "num_base_bdevs": 2, 00:20:27.220 "num_base_bdevs_discovered": 1, 00:20:27.220 "num_base_bdevs_operational": 1, 00:20:27.220 "base_bdevs_list": [ 00:20:27.220 { 00:20:27.220 "name": null, 00:20:27.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.220 "is_configured": false, 00:20:27.220 "data_offset": 0, 00:20:27.220 "data_size": 7936 00:20:27.220 }, 00:20:27.220 { 00:20:27.220 "name": "BaseBdev2", 00:20:27.220 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:27.220 "is_configured": true, 00:20:27.220 "data_offset": 256, 00:20:27.220 "data_size": 7936 00:20:27.220 } 00:20:27.220 ] 00:20:27.220 }' 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.220 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.787 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:27.787 14:37:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.787 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.787 [2024-11-20 14:37:28.659275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:27.787 [2024-11-20 14:37:28.659382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.787 [2024-11-20 14:37:28.659422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:27.787 [2024-11-20 14:37:28.659441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.787 [2024-11-20 14:37:28.659748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.787 [2024-11-20 14:37:28.659778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:27.787 [2024-11-20 14:37:28.659859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:27.787 [2024-11-20 14:37:28.659883] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:27.787 [2024-11-20 14:37:28.659897] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:27.787 [2024-11-20 14:37:28.659947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:27.787 [2024-11-20 14:37:28.676601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:27.787 spare 00:20:27.787 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.787 14:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:27.787 [2024-11-20 14:37:28.679261] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:28.722 "name": "raid_bdev1", 00:20:28.722 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:28.722 "strip_size_kb": 0, 00:20:28.722 "state": "online", 00:20:28.722 "raid_level": "raid1", 00:20:28.722 "superblock": true, 00:20:28.722 "num_base_bdevs": 2, 00:20:28.722 "num_base_bdevs_discovered": 2, 00:20:28.722 "num_base_bdevs_operational": 2, 00:20:28.722 "process": { 00:20:28.722 "type": "rebuild", 00:20:28.722 "target": "spare", 00:20:28.722 "progress": { 00:20:28.722 "blocks": 2560, 00:20:28.722 "percent": 32 00:20:28.722 } 00:20:28.722 }, 00:20:28.722 "base_bdevs_list": [ 00:20:28.722 { 00:20:28.722 "name": "spare", 00:20:28.722 "uuid": "2e398fad-7684-5b1a-955b-37c94706231b", 00:20:28.722 "is_configured": true, 00:20:28.722 "data_offset": 256, 00:20:28.722 "data_size": 7936 00:20:28.722 }, 00:20:28.722 { 00:20:28.722 "name": "BaseBdev2", 00:20:28.722 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:28.722 "is_configured": true, 00:20:28.722 "data_offset": 256, 00:20:28.722 "data_size": 7936 00:20:28.722 } 00:20:28.722 ] 00:20:28.722 }' 00:20:28.722 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.980 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.980 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.980 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.980 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.981 [2024-11-20 
14:37:29.852501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.981 [2024-11-20 14:37:29.887571] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:28.981 [2024-11-20 14:37:29.887688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.981 [2024-11-20 14:37:29.887728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:28.981 [2024-11-20 14:37:29.887740] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.981 14:37:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.981 "name": "raid_bdev1", 00:20:28.981 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:28.981 "strip_size_kb": 0, 00:20:28.981 "state": "online", 00:20:28.981 "raid_level": "raid1", 00:20:28.981 "superblock": true, 00:20:28.981 "num_base_bdevs": 2, 00:20:28.981 "num_base_bdevs_discovered": 1, 00:20:28.981 "num_base_bdevs_operational": 1, 00:20:28.981 "base_bdevs_list": [ 00:20:28.981 { 00:20:28.981 "name": null, 00:20:28.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.981 "is_configured": false, 00:20:28.981 "data_offset": 0, 00:20:28.981 "data_size": 7936 00:20:28.981 }, 00:20:28.981 { 00:20:28.981 "name": "BaseBdev2", 00:20:28.981 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:28.981 "is_configured": true, 00:20:28.981 "data_offset": 256, 00:20:28.981 "data_size": 7936 00:20:28.981 } 00:20:28.981 ] 00:20:28.981 }' 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.981 14:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.546 14:37:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.546 "name": "raid_bdev1", 00:20:29.546 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:29.546 "strip_size_kb": 0, 00:20:29.546 "state": "online", 00:20:29.546 "raid_level": "raid1", 00:20:29.546 "superblock": true, 00:20:29.546 "num_base_bdevs": 2, 00:20:29.546 "num_base_bdevs_discovered": 1, 00:20:29.546 "num_base_bdevs_operational": 1, 00:20:29.546 "base_bdevs_list": [ 00:20:29.546 { 00:20:29.546 "name": null, 00:20:29.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.546 "is_configured": false, 00:20:29.546 "data_offset": 0, 00:20:29.546 "data_size": 7936 00:20:29.546 }, 00:20:29.546 { 00:20:29.546 "name": "BaseBdev2", 00:20:29.546 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:29.546 "is_configured": true, 00:20:29.546 "data_offset": 256, 
00:20:29.546 "data_size": 7936 00:20:29.546 } 00:20:29.546 ] 00:20:29.546 }' 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.546 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:29.805 [2024-11-20 14:37:30.601846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:29.805 [2024-11-20 14:37:30.601924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.805 [2024-11-20 14:37:30.601960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:29.805 [2024-11-20 14:37:30.601976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.805 [2024-11-20 14:37:30.602251] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.805 [2024-11-20 14:37:30.602277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:29.805 [2024-11-20 14:37:30.602346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:29.805 [2024-11-20 14:37:30.602367] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:29.805 [2024-11-20 14:37:30.602382] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:29.805 [2024-11-20 14:37:30.602395] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:29.805 BaseBdev1 00:20:29.805 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.805 14:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.741 14:37:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.741 "name": "raid_bdev1", 00:20:30.741 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:30.741 "strip_size_kb": 0, 00:20:30.741 "state": "online", 00:20:30.741 "raid_level": "raid1", 00:20:30.741 "superblock": true, 00:20:30.741 "num_base_bdevs": 2, 00:20:30.741 "num_base_bdevs_discovered": 1, 00:20:30.741 "num_base_bdevs_operational": 1, 00:20:30.741 "base_bdevs_list": [ 00:20:30.741 { 00:20:30.741 "name": null, 00:20:30.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.741 "is_configured": false, 00:20:30.741 "data_offset": 0, 00:20:30.741 "data_size": 7936 00:20:30.741 }, 00:20:30.741 { 00:20:30.741 "name": "BaseBdev2", 00:20:30.741 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:30.741 "is_configured": true, 00:20:30.741 "data_offset": 256, 00:20:30.741 "data_size": 7936 00:20:30.741 } 00:20:30.741 ] 00:20:30.741 }' 00:20:30.741 14:37:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.741 14:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.308 "name": "raid_bdev1", 00:20:31.308 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:31.308 "strip_size_kb": 0, 00:20:31.308 "state": "online", 00:20:31.308 "raid_level": "raid1", 00:20:31.308 "superblock": true, 00:20:31.308 "num_base_bdevs": 2, 00:20:31.308 "num_base_bdevs_discovered": 1, 00:20:31.308 "num_base_bdevs_operational": 1, 00:20:31.308 "base_bdevs_list": [ 00:20:31.308 { 00:20:31.308 "name": 
null, 00:20:31.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.308 "is_configured": false, 00:20:31.308 "data_offset": 0, 00:20:31.308 "data_size": 7936 00:20:31.308 }, 00:20:31.308 { 00:20:31.308 "name": "BaseBdev2", 00:20:31.308 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:31.308 "is_configured": true, 00:20:31.308 "data_offset": 256, 00:20:31.308 "data_size": 7936 00:20:31.308 } 00:20:31.308 ] 00:20:31.308 }' 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:31.308 [2024-11-20 14:37:32.290451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.308 [2024-11-20 14:37:32.290715] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:31.308 [2024-11-20 14:37:32.290745] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:31.308 request: 00:20:31.308 { 00:20:31.308 "base_bdev": "BaseBdev1", 00:20:31.308 "raid_bdev": "raid_bdev1", 00:20:31.308 "method": "bdev_raid_add_base_bdev", 00:20:31.308 "req_id": 1 00:20:31.308 } 00:20:31.308 Got JSON-RPC error response 00:20:31.308 response: 00:20:31.308 { 00:20:31.308 "code": -22, 00:20:31.308 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:31.308 } 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.308 14:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.686 "name": "raid_bdev1", 00:20:32.686 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:32.686 "strip_size_kb": 0, 
00:20:32.686 "state": "online", 00:20:32.686 "raid_level": "raid1", 00:20:32.686 "superblock": true, 00:20:32.686 "num_base_bdevs": 2, 00:20:32.686 "num_base_bdevs_discovered": 1, 00:20:32.686 "num_base_bdevs_operational": 1, 00:20:32.686 "base_bdevs_list": [ 00:20:32.686 { 00:20:32.686 "name": null, 00:20:32.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.686 "is_configured": false, 00:20:32.686 "data_offset": 0, 00:20:32.686 "data_size": 7936 00:20:32.686 }, 00:20:32.686 { 00:20:32.686 "name": "BaseBdev2", 00:20:32.686 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:32.686 "is_configured": true, 00:20:32.686 "data_offset": 256, 00:20:32.686 "data_size": 7936 00:20:32.686 } 00:20:32.686 ] 00:20:32.686 }' 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.686 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.945 
14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.945 "name": "raid_bdev1", 00:20:32.945 "uuid": "c78145be-1240-4c67-8e80-e762b40dd1b5", 00:20:32.945 "strip_size_kb": 0, 00:20:32.945 "state": "online", 00:20:32.945 "raid_level": "raid1", 00:20:32.945 "superblock": true, 00:20:32.945 "num_base_bdevs": 2, 00:20:32.945 "num_base_bdevs_discovered": 1, 00:20:32.945 "num_base_bdevs_operational": 1, 00:20:32.945 "base_bdevs_list": [ 00:20:32.945 { 00:20:32.945 "name": null, 00:20:32.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.945 "is_configured": false, 00:20:32.945 "data_offset": 0, 00:20:32.945 "data_size": 7936 00:20:32.945 }, 00:20:32.945 { 00:20:32.945 "name": "BaseBdev2", 00:20:32.945 "uuid": "79c74234-89f0-5abb-b8f4-e6765b2ff92f", 00:20:32.945 "is_configured": true, 00:20:32.945 "data_offset": 256, 00:20:32.945 "data_size": 7936 00:20:32.945 } 00:20:32.945 ] 00:20:32.945 }' 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89592 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89592 ']' 00:20:32.945 14:37:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89592 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89592 00:20:32.945 killing process with pid 89592 00:20:32.945 Received shutdown signal, test time was about 60.000000 seconds 00:20:32.945 00:20:32.945 Latency(us) 00:20:32.945 [2024-11-20T14:37:34.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.945 [2024-11-20T14:37:34.002Z] =================================================================================================================== 00:20:32.945 [2024-11-20T14:37:34.002Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89592' 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89592 00:20:32.945 [2024-11-20 14:37:33.983137] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.945 14:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89592 00:20:32.945 [2024-11-20 14:37:33.983293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.945 [2024-11-20 14:37:33.983357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:32.945 [2024-11-20 14:37:33.983407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:33.204 [2024-11-20 14:37:34.245037] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:34.581 14:37:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:34.581 00:20:34.581 real 0m18.580s 00:20:34.581 user 0m25.386s 00:20:34.581 sys 0m1.427s 00:20:34.581 14:37:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.581 14:37:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:34.581 ************************************ 00:20:34.581 END TEST raid_rebuild_test_sb_md_interleaved 00:20:34.581 ************************************ 00:20:34.581 14:37:35 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:34.581 14:37:35 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:34.581 14:37:35 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89592 ']' 00:20:34.581 14:37:35 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89592 00:20:34.581 14:37:35 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:34.581 00:20:34.581 real 13m6.740s 00:20:34.581 user 18m26.548s 00:20:34.581 sys 1m48.275s 00:20:34.581 14:37:35 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.581 ************************************ 00:20:34.581 END TEST bdev_raid 00:20:34.581 ************************************ 00:20:34.581 14:37:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.581 14:37:35 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:34.581 14:37:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:34.581 14:37:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.581 14:37:35 -- common/autotest_common.sh@10 -- # set +x 00:20:34.581 
************************************ 00:20:34.581 START TEST spdkcli_raid 00:20:34.581 ************************************ 00:20:34.581 14:37:35 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:34.581 * Looking for test storage... 00:20:34.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:34.581 14:37:35 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:34.581 14:37:35 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:34.581 14:37:35 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.582 14:37:35 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.582 --rc genhtml_branch_coverage=1 00:20:34.582 --rc genhtml_function_coverage=1 00:20:34.582 --rc genhtml_legend=1 00:20:34.582 --rc geninfo_all_blocks=1 00:20:34.582 --rc geninfo_unexecuted_blocks=1 00:20:34.582 00:20:34.582 ' 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.582 --rc genhtml_branch_coverage=1 00:20:34.582 --rc genhtml_function_coverage=1 00:20:34.582 --rc genhtml_legend=1 00:20:34.582 --rc geninfo_all_blocks=1 00:20:34.582 --rc geninfo_unexecuted_blocks=1 00:20:34.582 00:20:34.582 ' 00:20:34.582 
14:37:35 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.582 --rc genhtml_branch_coverage=1 00:20:34.582 --rc genhtml_function_coverage=1 00:20:34.582 --rc genhtml_legend=1 00:20:34.582 --rc geninfo_all_blocks=1 00:20:34.582 --rc geninfo_unexecuted_blocks=1 00:20:34.582 00:20:34.582 ' 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:34.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.582 --rc genhtml_branch_coverage=1 00:20:34.582 --rc genhtml_function_coverage=1 00:20:34.582 --rc genhtml_legend=1 00:20:34.582 --rc geninfo_all_blocks=1 00:20:34.582 --rc geninfo_unexecuted_blocks=1 00:20:34.582 00:20:34.582 ' 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:34.582 14:37:35 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90273 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:34.582 14:37:35 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90273 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90273 ']' 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.582 14:37:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.841 [2024-11-20 14:37:35.756613] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:20:34.841 [2024-11-20 14:37:35.756831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90273 ] 00:20:35.100 [2024-11-20 14:37:35.946495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:35.100 [2024-11-20 14:37:36.085709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.100 [2024-11-20 14:37:36.085719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.036 14:37:36 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.036 14:37:36 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:36.036 14:37:36 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:36.036 14:37:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.036 14:37:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.036 14:37:37 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:36.036 14:37:37 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.036 14:37:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.036 14:37:37 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:36.036 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:36.036 ' 00:20:37.937 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:37.937 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:37.937 14:37:38 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:37.937 14:37:38 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.937 14:37:38 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.937 14:37:38 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:37.937 14:37:38 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.937 14:37:38 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.937 14:37:38 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:37.937 ' 00:20:38.871 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:38.871 14:37:39 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:38.871 14:37:39 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.871 14:37:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.139 14:37:39 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:39.139 14:37:39 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.139 14:37:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.139 14:37:39 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:39.139 14:37:39 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:39.705 14:37:40 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:39.705 14:37:40 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:39.705 14:37:40 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:39.705 14:37:40 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.705 14:37:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.705 14:37:40 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:39.705 14:37:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.705 14:37:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.705 14:37:40 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:39.705 ' 00:20:40.640 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:40.898 14:37:41 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:40.898 14:37:41 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.898 14:37:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.898 14:37:41 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:40.898 14:37:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.898 14:37:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:40.898 14:37:41 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:40.898 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:40.898 ' 00:20:42.272 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:42.272 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:42.272 14:37:43 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:42.272 14:37:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.272 14:37:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.272 14:37:43 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90273 00:20:42.272 14:37:43 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90273 ']' 00:20:42.272 14:37:43 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90273 00:20:42.272 14:37:43 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:42.272 14:37:43 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.272 14:37:43 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90273 00:20:42.529 14:37:43 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.529 14:37:43 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.529 14:37:43 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90273' 00:20:42.529 killing process with pid 90273 00:20:42.529 14:37:43 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90273 00:20:42.529 14:37:43 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90273 00:20:45.063 14:37:45 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:45.063 14:37:45 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90273 ']' 00:20:45.063 14:37:45 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90273 00:20:45.063 Process with pid 90273 is not found 00:20:45.063 14:37:45 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90273 ']' 00:20:45.063 14:37:45 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90273 00:20:45.063 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90273) - No such process 00:20:45.063 14:37:45 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90273 is not found' 00:20:45.063 14:37:45 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:45.063 14:37:45 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:45.063 14:37:45 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:45.063 14:37:45 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:45.063 ************************************ 00:20:45.063 END TEST spdkcli_raid 
00:20:45.063 ************************************ 00:20:45.063 00:20:45.063 real 0m10.097s 00:20:45.063 user 0m20.828s 00:20:45.063 sys 0m1.224s 00:20:45.063 14:37:45 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.063 14:37:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.063 14:37:45 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:45.063 14:37:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:45.063 14:37:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.063 14:37:45 -- common/autotest_common.sh@10 -- # set +x 00:20:45.063 ************************************ 00:20:45.063 START TEST blockdev_raid5f 00:20:45.063 ************************************ 00:20:45.063 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:45.063 * Looking for test storage... 00:20:45.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:45.063 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:45.063 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:20:45.063 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:45.063 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.063 14:37:45 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:45.063 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.063 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:45.063 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.063 --rc genhtml_branch_coverage=1 00:20:45.063 --rc genhtml_function_coverage=1 00:20:45.063 --rc genhtml_legend=1 00:20:45.063 --rc geninfo_all_blocks=1 00:20:45.064 --rc geninfo_unexecuted_blocks=1 00:20:45.064 00:20:45.064 ' 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.064 --rc genhtml_branch_coverage=1 00:20:45.064 --rc genhtml_function_coverage=1 00:20:45.064 --rc genhtml_legend=1 00:20:45.064 --rc geninfo_all_blocks=1 00:20:45.064 --rc geninfo_unexecuted_blocks=1 00:20:45.064 00:20:45.064 ' 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.064 --rc genhtml_branch_coverage=1 00:20:45.064 --rc genhtml_function_coverage=1 00:20:45.064 --rc genhtml_legend=1 00:20:45.064 --rc geninfo_all_blocks=1 00:20:45.064 --rc geninfo_unexecuted_blocks=1 00:20:45.064 00:20:45.064 ' 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:45.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.064 --rc genhtml_branch_coverage=1 00:20:45.064 --rc genhtml_function_coverage=1 00:20:45.064 --rc genhtml_legend=1 00:20:45.064 --rc geninfo_all_blocks=1 00:20:45.064 --rc geninfo_unexecuted_blocks=1 00:20:45.064 00:20:45.064 ' 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90549 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:45.064 14:37:45 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90549 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90549 ']' 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.064 14:37:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:45.064 [2024-11-20 14:37:45.918009] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:20:45.064 [2024-11-20 14:37:45.919230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90549 ] 00:20:45.064 [2024-11-20 14:37:46.112565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.323 [2024-11-20 14:37:46.263243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:46.258 14:37:47 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 Malloc0 00:20:46.258 Malloc1 00:20:46.258 Malloc2 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.258 14:37:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.258 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8d818fd6-48df-4ac3-b463-58ced8c98f7b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8d818fd6-48df-4ac3-b463-58ced8c98f7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8d818fd6-48df-4ac3-b463-58ced8c98f7b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d83e5d1f-adad-4fd6-850a-c97635ee346a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c926ec10-dd3f-4508-b410-b1850bab3c86",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "77f7d94a-cff6-43a0-826d-788a0a9644c2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:46.516 14:37:47 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90549 00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90549 ']' 00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90549 00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.516 14:37:47 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90549 00:20:46.516 killing process with pid 90549 00:20:46.517 14:37:47 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.517 14:37:47 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.517 14:37:47 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90549' 00:20:46.517 14:37:47 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90549 00:20:46.517 14:37:47 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90549 00:20:49.055 14:37:49 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:49.055 14:37:49 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:49.055 14:37:49 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:49.055 14:37:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:49.055 14:37:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:49.055 ************************************ 00:20:49.055 START TEST bdev_hello_world 00:20:49.055 ************************************ 00:20:49.055 14:37:49 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:49.055 [2024-11-20 14:37:49.935423] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:20:49.055 [2024-11-20 14:37:49.935964] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90611 ] 00:20:49.314 [2024-11-20 14:37:50.128895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.314 [2024-11-20 14:37:50.258261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.880 [2024-11-20 14:37:50.779838] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:49.880 [2024-11-20 14:37:50.780397] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:49.880 [2024-11-20 14:37:50.780506] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:49.880 [2024-11-20 14:37:50.781325] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:49.880 [2024-11-20 14:37:50.781607] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:49.880 [2024-11-20 14:37:50.781735] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:49.880 [2024-11-20 14:37:50.781887] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:49.880 00:20:49.880 [2024-11-20 14:37:50.782015] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:51.254 00:20:51.254 real 0m2.197s 00:20:51.254 user 0m1.752s 00:20:51.254 sys 0m0.321s 00:20:51.254 14:37:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.254 14:37:52 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:51.254 ************************************ 00:20:51.254 END TEST bdev_hello_world 00:20:51.254 ************************************ 00:20:51.254 14:37:52 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:51.254 14:37:52 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.254 14:37:52 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.254 14:37:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:51.254 ************************************ 00:20:51.254 START TEST bdev_bounds 00:20:51.254 ************************************ 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90655 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90655' 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:51.254 Process bdevio pid: 90655 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90655 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90655 ']' 00:20:51.254 14:37:52 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.254 14:37:52 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:51.254 [2024-11-20 14:37:52.185420] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:20:51.254 [2024-11-20 14:37:52.185662] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90655 ] 00:20:51.513 [2024-11-20 14:37:52.371424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:51.513 [2024-11-20 14:37:52.506866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.513 [2024-11-20 14:37:52.507003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.513 [2024-11-20 14:37:52.507009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.080 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.080 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:52.080 14:37:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:52.338 I/O targets: 00:20:52.338 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:52.338 00:20:52.338 
00:20:52.338 CUnit - A unit testing framework for C - Version 2.1-3 00:20:52.338 http://cunit.sourceforge.net/ 00:20:52.338 00:20:52.338 00:20:52.338 Suite: bdevio tests on: raid5f 00:20:52.338 Test: blockdev write read block ...passed 00:20:52.338 Test: blockdev write zeroes read block ...passed 00:20:52.338 Test: blockdev write zeroes read no split ...passed 00:20:52.338 Test: blockdev write zeroes read split ...passed 00:20:52.597 Test: blockdev write zeroes read split partial ...passed 00:20:52.597 Test: blockdev reset ...passed 00:20:52.597 Test: blockdev write read 8 blocks ...passed 00:20:52.597 Test: blockdev write read size > 128k ...passed 00:20:52.597 Test: blockdev write read invalid size ...passed 00:20:52.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:52.597 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:52.597 Test: blockdev write read max offset ...passed 00:20:52.597 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:52.597 Test: blockdev writev readv 8 blocks ...passed 00:20:52.597 Test: blockdev writev readv 30 x 1block ...passed 00:20:52.597 Test: blockdev writev readv block ...passed 00:20:52.597 Test: blockdev writev readv size > 128k ...passed 00:20:52.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:52.597 Test: blockdev comparev and writev ...passed 00:20:52.597 Test: blockdev nvme passthru rw ...passed 00:20:52.597 Test: blockdev nvme passthru vendor specific ...passed 00:20:52.597 Test: blockdev nvme admin passthru ...passed 00:20:52.597 Test: blockdev copy ...passed 00:20:52.597 00:20:52.597 Run Summary: Type Total Ran Passed Failed Inactive 00:20:52.597 suites 1 1 n/a 0 0 00:20:52.597 tests 23 23 23 0 0 00:20:52.597 asserts 130 130 130 0 n/a 00:20:52.597 00:20:52.597 Elapsed time = 0.522 seconds 00:20:52.597 0 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90655 00:20:52.597 
14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90655 ']' 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90655 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90655 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.597 killing process with pid 90655 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90655' 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90655 00:20:52.597 14:37:53 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90655 00:20:53.973 14:37:54 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:53.973 00:20:53.973 real 0m2.761s 00:20:53.973 user 0m6.795s 00:20:53.973 sys 0m0.483s 00:20:53.973 ************************************ 00:20:53.973 END TEST bdev_bounds 00:20:53.973 ************************************ 00:20:53.973 14:37:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.973 14:37:54 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:53.973 14:37:54 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:53.973 14:37:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:53.973 14:37:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.973 
14:37:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:53.973 ************************************ 00:20:53.973 START TEST bdev_nbd 00:20:53.973 ************************************ 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90715 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90715 /var/tmp/spdk-nbd.sock 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90715 ']' 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.973 14:37:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:53.973 [2024-11-20 14:37:55.007990] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:20:53.973 [2024-11-20 14:37:55.008198] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.231 [2024-11-20 14:37:55.191751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.489 [2024-11-20 14:37:55.326949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:55.055 14:37:55 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:55.314 1+0 records in 00:20:55.314 1+0 records out 00:20:55.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317867 s, 12.9 MB/s 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:55.314 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:55.572 { 00:20:55.572 "nbd_device": "/dev/nbd0", 00:20:55.572 "bdev_name": "raid5f" 00:20:55.572 } 00:20:55.572 ]' 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:55.572 { 00:20:55.572 "nbd_device": "/dev/nbd0", 00:20:55.572 "bdev_name": "raid5f" 00:20:55.572 } 00:20:55.572 ]' 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.572 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.138 14:37:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:56.138 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:56.397 /dev/nbd0 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:56.397 14:37:57 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:56.397 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.656 1+0 records in 00:20:56.656 1+0 records out 00:20:56.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335348 s, 12.2 MB/s 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.656 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:56.914 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:56.914 { 00:20:56.914 "nbd_device": "/dev/nbd0", 00:20:56.914 "bdev_name": "raid5f" 00:20:56.914 } 00:20:56.915 ]' 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:56.915 { 00:20:56.915 "nbd_device": "/dev/nbd0", 00:20:56.915 "bdev_name": "raid5f" 00:20:56.915 } 00:20:56.915 ]' 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:56.915 256+0 records in 00:20:56.915 256+0 records out 00:20:56.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671608 s, 156 MB/s 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:56.915 256+0 records in 00:20:56.915 256+0 records out 00:20:56.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383113 s, 27.4 MB/s 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:56.915 14:37:57 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:57.174 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:57.431 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:57.431 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:57.431 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:57.690 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:57.948 malloc_lvol_verify 00:20:57.948 14:37:58 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:58.206 683042a8-5c20-4ac1-831f-11bef5343e0d 00:20:58.206 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:58.464 3433969f-ef16-40cd-bd23-b2fc7d69fc8e 00:20:58.464 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:58.723 /dev/nbd0 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:58.723 mke2fs 1.47.0 (5-Feb-2023) 00:20:58.723 Discarding device blocks: 0/4096 done 00:20:58.723 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:58.723 00:20:58.723 Allocating group tables: 0/1 done 00:20:58.723 Writing inode tables: 0/1 done 00:20:58.723 Creating journal (1024 blocks): done 00:20:58.723 Writing superblocks and filesystem accounting information: 0/1 done 00:20:58.723 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:58.723 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90715 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90715 ']' 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90715 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90715 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:58.981 killing process with pid 90715 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90715' 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90715 00:20:58.981 14:37:59 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90715 00:21:00.352 14:38:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:00.352 00:21:00.352 real 0m6.415s 00:21:00.352 user 0m9.157s 00:21:00.352 sys 0m1.407s 00:21:00.352 14:38:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.352 ************************************ 00:21:00.352 END TEST bdev_nbd 00:21:00.352 ************************************ 00:21:00.352 14:38:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:00.352 14:38:01 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:00.352 14:38:01 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:21:00.352 14:38:01 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:21:00.352 14:38:01 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:00.352 14:38:01 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.352 14:38:01 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.352 14:38:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:00.352 ************************************ 00:21:00.352 START TEST bdev_fio 00:21:00.352 ************************************ 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:00.352 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:00.352 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:00.610 ************************************ 00:21:00.610 START TEST bdev_fio_rw_verify 00:21:00.610 ************************************ 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:00.610 14:38:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:00.868 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:00.868 fio-3.35 00:21:00.868 Starting 1 thread 00:21:13.064 00:21:13.064 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90922: Wed Nov 20 14:38:12 2024 00:21:13.064 read: IOPS=8897, BW=34.8MiB/s (36.4MB/s)(348MiB/10001msec) 00:21:13.064 slat (usec): min=22, max=108, avg=27.51, stdev= 4.07 00:21:13.064 clat (usec): min=13, max=803, avg=179.67, stdev=65.69 00:21:13.064 lat (usec): min=39, max=832, avg=207.18, stdev=66.31 00:21:13.064 clat percentiles (usec): 00:21:13.064 | 50.000th=[ 180], 99.000th=[ 302], 99.900th=[ 363], 99.990th=[ 494], 00:21:13.064 | 99.999th=[ 807] 00:21:13.064 write: IOPS=9356, BW=36.5MiB/s (38.3MB/s)(361MiB/9875msec); 0 zone resets 00:21:13.064 slat (usec): min=11, max=590, avg=22.69, stdev= 5.33 00:21:13.064 clat (usec): min=82, max=1216, avg=408.48, stdev=52.96 00:21:13.064 lat (usec): min=103, max=1427, avg=431.17, stdev=54.22 00:21:13.064 clat percentiles (usec): 00:21:13.064 | 50.000th=[ 416], 99.000th=[ 529], 99.900th=[ 627], 99.990th=[ 1029], 00:21:13.064 | 99.999th=[ 1221] 00:21:13.064 bw ( KiB/s): min=33744, max=40088, per=98.96%, avg=37034.95, stdev=1868.60, samples=19 00:21:13.064 iops : min= 8436, max=10022, avg=9258.74, stdev=467.15, samples=19 00:21:13.064 lat (usec) : 20=0.01%, 100=6.35%, 250=33.95%, 
500=58.69%, 750=0.98% 00:21:13.064 lat (usec) : 1000=0.02% 00:21:13.064 lat (msec) : 2=0.01% 00:21:13.064 cpu : usr=98.66%, sys=0.50%, ctx=29, majf=0, minf=7680 00:21:13.064 IO depths : 1=7.6%, 2=19.7%, 4=55.3%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:13.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.064 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:13.064 issued rwts: total=88979,92392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:13.064 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:13.064 00:21:13.064 Run status group 0 (all jobs): 00:21:13.064 READ: bw=34.8MiB/s (36.4MB/s), 34.8MiB/s-34.8MiB/s (36.4MB/s-36.4MB/s), io=348MiB (364MB), run=10001-10001msec 00:21:13.064 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=361MiB (378MB), run=9875-9875msec 00:21:13.322 ----------------------------------------------------- 00:21:13.322 Suppressions used: 00:21:13.322 count bytes template 00:21:13.322 1 7 /usr/src/fio/parse.c 00:21:13.322 641 61536 /usr/src/fio/iolog.c 00:21:13.322 1 8 libtcmalloc_minimal.so 00:21:13.322 1 904 libcrypto.so 00:21:13.322 ----------------------------------------------------- 00:21:13.322 00:21:13.322 00:21:13.322 real 0m12.804s 00:21:13.322 user 0m13.029s 00:21:13.322 sys 0m0.640s 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:13.322 ************************************ 00:21:13.322 END TEST bdev_fio_rw_verify 00:21:13.322 ************************************ 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # 
fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:13.322 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "8d818fd6-48df-4ac3-b463-58ced8c98f7b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "8d818fd6-48df-4ac3-b463-58ced8c98f7b",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "8d818fd6-48df-4ac3-b463-58ced8c98f7b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "d83e5d1f-adad-4fd6-850a-c97635ee346a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c926ec10-dd3f-4508-b410-b1850bab3c86",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "77f7d94a-cff6-43a0-826d-788a0a9644c2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:13.579 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:13.579 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:13.579 /home/vagrant/spdk_repo/spdk 00:21:13.579 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:13.579 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:13.579 14:38:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:21:13.579 00:21:13.579 real 
0m13.034s 00:21:13.579 user 0m13.140s 00:21:13.579 sys 0m0.734s 00:21:13.579 ************************************ 00:21:13.579 END TEST bdev_fio 00:21:13.579 ************************************ 00:21:13.579 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.579 14:38:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:13.579 14:38:14 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:13.579 14:38:14 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:13.579 14:38:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:13.579 14:38:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.579 14:38:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:13.579 ************************************ 00:21:13.579 START TEST bdev_verify 00:21:13.579 ************************************ 00:21:13.579 14:38:14 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:13.579 [2024-11-20 14:38:14.551100] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 
00:21:13.580 [2024-11-20 14:38:14.551495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91082 ] 00:21:13.837 [2024-11-20 14:38:14.737326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:13.837 [2024-11-20 14:38:14.866830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.837 [2024-11-20 14:38:14.866843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.401 Running I/O for 5 seconds... 00:21:16.723 11070.00 IOPS, 43.24 MiB/s [2024-11-20T14:38:18.714Z] 11915.50 IOPS, 46.54 MiB/s [2024-11-20T14:38:19.649Z] 12652.00 IOPS, 49.42 MiB/s [2024-11-20T14:38:20.584Z] 13105.00 IOPS, 51.19 MiB/s [2024-11-20T14:38:20.584Z] 13245.80 IOPS, 51.74 MiB/s 00:21:19.527 Latency(us) 00:21:19.527 [2024-11-20T14:38:20.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.527 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.527 Verification LBA range: start 0x0 length 0x2000 00:21:19.527 raid5f : 5.02 6601.15 25.79 0.00 0.00 29226.95 288.58 24903.68 00:21:19.527 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:19.527 Verification LBA range: start 0x2000 length 0x2000 00:21:19.527 raid5f : 5.01 6635.94 25.92 0.00 0.00 28963.13 275.55 25022.84 00:21:19.527 [2024-11-20T14:38:20.584Z] =================================================================================================================== 00:21:19.527 [2024-11-20T14:38:20.584Z] Total : 13237.09 51.71 0.00 0.00 29094.82 275.55 25022.84 00:21:20.901 ************************************ 00:21:20.901 END TEST bdev_verify 00:21:20.901 ************************************ 00:21:20.901 00:21:20.901 real 0m7.310s 00:21:20.901 user 0m13.390s 00:21:20.901 sys 0m0.329s 
00:21:20.901 14:38:21 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.901 14:38:21 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:20.901 14:38:21 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:20.901 14:38:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:20.901 14:38:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.901 14:38:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:20.901 ************************************ 00:21:20.901 START TEST bdev_verify_big_io 00:21:20.901 ************************************ 00:21:20.901 14:38:21 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:20.901 [2024-11-20 14:38:21.915551] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:21:20.901 [2024-11-20 14:38:21.915755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91175 ] 00:21:21.158 [2024-11-20 14:38:22.101745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:21.416 [2024-11-20 14:38:22.234647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.416 [2024-11-20 14:38:22.234656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.981 Running I/O for 5 seconds... 
00:21:23.851 506.00 IOPS, 31.62 MiB/s [2024-11-20T14:38:25.843Z] 634.00 IOPS, 39.62 MiB/s [2024-11-20T14:38:27.216Z] 676.67 IOPS, 42.29 MiB/s [2024-11-20T14:38:28.150Z] 697.50 IOPS, 43.59 MiB/s [2024-11-20T14:38:28.409Z] 710.40 IOPS, 44.40 MiB/s 00:21:27.352 Latency(us) 00:21:27.352 [2024-11-20T14:38:28.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.352 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:27.352 Verification LBA range: start 0x0 length 0x200 00:21:27.352 raid5f : 5.36 355.08 22.19 0.00 0.00 8935279.16 185.25 392739.37 00:21:27.352 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:27.352 Verification LBA range: start 0x200 length 0x200 00:21:27.352 raid5f : 5.36 355.36 22.21 0.00 0.00 8923609.56 316.51 392739.37 00:21:27.352 [2024-11-20T14:38:28.409Z] =================================================================================================================== 00:21:27.352 [2024-11-20T14:38:28.409Z] Total : 710.44 44.40 0.00 0.00 8929444.36 185.25 392739.37 00:21:28.740 00:21:28.740 real 0m7.690s 00:21:28.740 user 0m14.128s 00:21:28.740 sys 0m0.344s 00:21:28.740 14:38:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.740 14:38:29 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:28.741 ************************************ 00:21:28.741 END TEST bdev_verify_big_io 00:21:28.741 ************************************ 00:21:28.741 14:38:29 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.741 14:38:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:28.741 14:38:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.741 14:38:29 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:28.741 ************************************ 00:21:28.741 START TEST bdev_write_zeroes 00:21:28.741 ************************************ 00:21:28.741 14:38:29 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:28.741 [2024-11-20 14:38:29.663042] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:21:28.741 [2024-11-20 14:38:29.663250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91279 ] 00:21:28.999 [2024-11-20 14:38:29.849417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.999 [2024-11-20 14:38:29.978357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.566 Running I/O for 1 seconds... 
00:21:30.501 21207.00 IOPS, 82.84 MiB/s 00:21:30.501 Latency(us) 00:21:30.501 [2024-11-20T14:38:31.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.501 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:30.501 raid5f : 1.01 21191.36 82.78 0.00 0.00 6017.36 1995.87 7804.74 00:21:30.501 [2024-11-20T14:38:31.558Z] =================================================================================================================== 00:21:30.501 [2024-11-20T14:38:31.558Z] Total : 21191.36 82.78 0.00 0.00 6017.36 1995.87 7804.74 00:21:31.877 00:21:31.877 real 0m3.238s 00:21:31.877 user 0m2.793s 00:21:31.877 sys 0m0.308s 00:21:31.877 14:38:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:31.877 ************************************ 00:21:31.877 END TEST bdev_write_zeroes 00:21:31.877 ************************************ 00:21:31.877 14:38:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:31.877 14:38:32 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:31.877 14:38:32 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:31.877 14:38:32 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:31.877 14:38:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:31.877 ************************************ 00:21:31.877 START TEST bdev_json_nonenclosed 00:21:31.877 ************************************ 00:21:31.877 14:38:32 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:32.135 [2024-11-20 
14:38:32.952566] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:21:32.135 [2024-11-20 14:38:32.952775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91327 ] 00:21:32.135 [2024-11-20 14:38:33.136015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.393 [2024-11-20 14:38:33.255147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.393 [2024-11-20 14:38:33.255548] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:32.393 [2024-11-20 14:38:33.255767] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:32.393 [2024-11-20 14:38:33.255881] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:32.708 00:21:32.708 real 0m0.660s 00:21:32.708 user 0m0.403s 00:21:32.708 sys 0m0.151s 00:21:32.708 14:38:33 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.708 ************************************ 00:21:32.708 END TEST bdev_json_nonenclosed 00:21:32.708 ************************************ 00:21:32.708 14:38:33 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:32.708 14:38:33 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:32.708 14:38:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:32.708 14:38:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:32.708 14:38:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:32.708 
************************************ 00:21:32.708 START TEST bdev_json_nonarray 00:21:32.708 ************************************ 00:21:32.708 14:38:33 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:32.708 [2024-11-20 14:38:33.663138] Starting SPDK v25.01-pre git sha1 23429eed7 / DPDK 24.03.0 initialization... 00:21:32.708 [2024-11-20 14:38:33.663339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91358 ] 00:21:32.965 [2024-11-20 14:38:33.848502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.965 [2024-11-20 14:38:33.975816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.965 [2024-11-20 14:38:33.975972] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:32.965 [2024-11-20 14:38:33.976003] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:32.965 [2024-11-20 14:38:33.976046] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:33.223 00:21:33.223 real 0m0.682s 00:21:33.223 user 0m0.438s 00:21:33.223 sys 0m0.139s 00:21:33.223 14:38:34 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.223 ************************************ 00:21:33.223 END TEST bdev_json_nonarray 00:21:33.223 ************************************ 00:21:33.223 14:38:34 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:33.482 14:38:34 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:33.482 00:21:33.482 real 0m48.738s 00:21:33.482 user 1m6.321s 00:21:33.482 sys 0m5.215s 00:21:33.482 14:38:34 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.482 ************************************ 00:21:33.482 END TEST blockdev_raid5f 00:21:33.482 
************************************ 00:21:33.482 14:38:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 14:38:34 -- spdk/autotest.sh@194 -- # uname -s 00:21:33.482 14:38:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:33.482 14:38:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:33.482 14:38:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:33.482 14:38:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:33.482 14:38:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.482 14:38:34 -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 14:38:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:33.482 14:38:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:33.482 14:38:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:33.482 14:38:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:33.482 14:38:34 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:33.482 14:38:34 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:21:33.482 14:38:34 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:33.482 14:38:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.482 14:38:34 -- common/autotest_common.sh@10 -- # set +x 00:21:33.482 14:38:34 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:33.482 14:38:34 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:33.482 14:38:34 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:33.482 14:38:34 -- common/autotest_common.sh@10 -- # set +x 00:21:35.381 INFO: APP EXITING 00:21:35.381 INFO: killing all VMs 00:21:35.381 INFO: killing vhost app 00:21:35.381 INFO: EXIT DONE 00:21:35.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.381 Waiting for block devices as requested 00:21:35.381 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:35.639 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:36.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:36.572 Cleaning 00:21:36.572 Removing: /var/run/dpdk/spdk0/config 00:21:36.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:36.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:36.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:36.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:36.572 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:36.572 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:36.572 Removing: /dev/shm/spdk_tgt_trace.pid56863 00:21:36.572 Removing: /var/run/dpdk/spdk0 00:21:36.572 Removing: /var/run/dpdk/spdk_pid56628 00:21:36.572 Removing: /var/run/dpdk/spdk_pid56863 00:21:36.572 Removing: /var/run/dpdk/spdk_pid57092 00:21:36.572 Removing: /var/run/dpdk/spdk_pid57202 00:21:36.572 Removing: /var/run/dpdk/spdk_pid57258 00:21:36.572 Removing: /var/run/dpdk/spdk_pid57386 00:21:36.572 Removing: /var/run/dpdk/spdk_pid57409 00:21:36.572 
Removing: /var/run/dpdk/spdk_pid57616
00:21:36.572 Removing: /var/run/dpdk/spdk_pid57733
00:21:36.572 Removing: /var/run/dpdk/spdk_pid57840
00:21:36.572 Removing: /var/run/dpdk/spdk_pid57962
00:21:36.572 Removing: /var/run/dpdk/spdk_pid58070
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58115
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58146
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58222
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58338
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58814
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58891
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58965
00:21:36.573 Removing: /var/run/dpdk/spdk_pid58992
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59143
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59165
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59319
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59336
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59406
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59429
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59499
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59517
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59718
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59754
00:21:36.573 Removing: /var/run/dpdk/spdk_pid59843
00:21:36.573 Removing: /var/run/dpdk/spdk_pid61221
00:21:36.573 Removing: /var/run/dpdk/spdk_pid61434
00:21:36.573 Removing: /var/run/dpdk/spdk_pid61585
00:21:36.573 Removing: /var/run/dpdk/spdk_pid62245
00:21:36.573 Removing: /var/run/dpdk/spdk_pid62457
00:21:36.573 Removing: /var/run/dpdk/spdk_pid62602
00:21:36.573 Removing: /var/run/dpdk/spdk_pid63262
00:21:36.573 Removing: /var/run/dpdk/spdk_pid63598
00:21:36.573 Removing: /var/run/dpdk/spdk_pid63743
00:21:36.573 Removing: /var/run/dpdk/spdk_pid65157
00:21:36.573 Removing: /var/run/dpdk/spdk_pid65410
00:21:36.573 Removing: /var/run/dpdk/spdk_pid65561
00:21:36.573 Removing: /var/run/dpdk/spdk_pid66974
00:21:36.573 Removing: /var/run/dpdk/spdk_pid67237
00:21:36.573 Removing: /var/run/dpdk/spdk_pid67384
00:21:36.573 Removing: /var/run/dpdk/spdk_pid68804
00:21:36.573 Removing: /var/run/dpdk/spdk_pid69254
00:21:36.573 Removing: /var/run/dpdk/spdk_pid69405
00:21:36.573 Removing: /var/run/dpdk/spdk_pid70917
00:21:36.573 Removing: /var/run/dpdk/spdk_pid71183
00:21:36.573 Removing: /var/run/dpdk/spdk_pid71334
00:21:36.573 Removing: /var/run/dpdk/spdk_pid72842
00:21:36.573 Removing: /var/run/dpdk/spdk_pid73112
00:21:36.573 Removing: /var/run/dpdk/spdk_pid73258
00:21:36.573 Removing: /var/run/dpdk/spdk_pid74772
00:21:36.573 Removing: /var/run/dpdk/spdk_pid75275
00:21:36.573 Removing: /var/run/dpdk/spdk_pid75422
00:21:36.573 Removing: /var/run/dpdk/spdk_pid75570
00:21:36.573 Removing: /var/run/dpdk/spdk_pid76018
00:21:36.573 Removing: /var/run/dpdk/spdk_pid76787
00:21:36.573 Removing: /var/run/dpdk/spdk_pid77173
00:21:36.573 Removing: /var/run/dpdk/spdk_pid77871
00:21:36.573 Removing: /var/run/dpdk/spdk_pid78359
00:21:36.573 Removing: /var/run/dpdk/spdk_pid79157
00:21:36.573 Removing: /var/run/dpdk/spdk_pid79577
00:21:36.573 Removing: /var/run/dpdk/spdk_pid81581
00:21:36.573 Removing: /var/run/dpdk/spdk_pid82038
00:21:36.573 Removing: /var/run/dpdk/spdk_pid82485
00:21:36.573 Removing: /var/run/dpdk/spdk_pid84610
00:21:36.573 Removing: /var/run/dpdk/spdk_pid85101
00:21:36.573 Removing: /var/run/dpdk/spdk_pid85609
00:21:36.573 Removing: /var/run/dpdk/spdk_pid86686
00:21:36.573 Removing: /var/run/dpdk/spdk_pid87014
00:21:36.573 Removing: /var/run/dpdk/spdk_pid87974
00:21:36.573 Removing: /var/run/dpdk/spdk_pid88302
00:21:36.573 Removing: /var/run/dpdk/spdk_pid89258
00:21:36.573 Removing: /var/run/dpdk/spdk_pid89592
00:21:36.573 Removing: /var/run/dpdk/spdk_pid90273
00:21:36.573 Removing: /var/run/dpdk/spdk_pid90549
00:21:36.573 Removing: /var/run/dpdk/spdk_pid90611
00:21:36.573 Removing: /var/run/dpdk/spdk_pid90655
00:21:36.573 Removing: /var/run/dpdk/spdk_pid90907
00:21:36.573 Removing: /var/run/dpdk/spdk_pid91082
00:21:36.573 Removing: /var/run/dpdk/spdk_pid91175
00:21:36.573 Removing: /var/run/dpdk/spdk_pid91279
00:21:36.573 Removing: /var/run/dpdk/spdk_pid91327
00:21:36.573 Removing: /var/run/dpdk/spdk_pid91358
00:21:36.573 Clean
00:21:36.832 14:38:37 -- common/autotest_common.sh@1453 -- # return 0
00:21:36.832 14:38:37 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:21:36.832 14:38:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:36.832 14:38:37 -- common/autotest_common.sh@10 -- # set +x
00:21:36.832 14:38:37 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:21:36.832 14:38:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:36.832 14:38:37 -- common/autotest_common.sh@10 -- # set +x
00:21:36.832 14:38:37 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:36.832 14:38:37 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:36.832 14:38:37 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:36.832 14:38:37 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:21:36.832 14:38:37 -- spdk/autotest.sh@398 -- # hostname
00:21:36.832 14:38:37 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:37.090 geninfo: WARNING: invalid characters removed from testname!
00:22:03.639 14:39:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:06.925 14:39:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:10.216 14:39:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:12.749 14:39:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:15.277 14:39:15 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:17.808 14:39:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:20.336 14:39:20 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:20.336 14:39:20 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:20.336 14:39:20 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:22:20.336 14:39:20 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:20.336 14:39:20 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:20.336 14:39:20 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:20.336 + [[ -n 5261 ]]
00:22:20.336 + sudo kill 5261
00:22:20.345 [Pipeline] }
00:22:20.360 [Pipeline] // timeout
00:22:20.366 [Pipeline] }
00:22:20.382 [Pipeline] // stage
00:22:20.387 [Pipeline] }
00:22:20.404 [Pipeline] // catchError
00:22:20.415 [Pipeline] stage
00:22:20.417 [Pipeline] { (Stop VM)
00:22:20.429 [Pipeline] sh
00:22:20.708 + vagrant halt
00:22:24.035 ==> default: Halting domain...
00:22:29.315 [Pipeline] sh
00:22:29.594 + vagrant destroy -f
00:22:32.884 ==> default: Removing domain...
00:22:32.895 [Pipeline] sh
00:22:33.174 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:22:33.183 [Pipeline] }
00:22:33.199 [Pipeline] // stage
00:22:33.206 [Pipeline] }
00:22:33.221 [Pipeline] // dir
00:22:33.226 [Pipeline] }
00:22:33.240 [Pipeline] // wrap
00:22:33.246 [Pipeline] }
00:22:33.259 [Pipeline] // catchError
00:22:33.268 [Pipeline] stage
00:22:33.270 [Pipeline] { (Epilogue)
00:22:33.284 [Pipeline] sh
00:22:33.565 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:38.841 [Pipeline] catchError
00:22:38.844 [Pipeline] {
00:22:38.857 [Pipeline] sh
00:22:39.139 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:39.396 Artifacts sizes are good
00:22:39.406 [Pipeline] }
00:22:39.421 [Pipeline] // catchError
00:22:39.431 [Pipeline] archiveArtifacts
00:22:39.437 Archiving artifacts
00:22:39.533 [Pipeline] cleanWs
00:22:39.543 [WS-CLEANUP] Deleting project workspace...
00:22:39.544 [WS-CLEANUP] Deferred wipeout is used...
00:22:39.549 [WS-CLEANUP] done
00:22:39.552 [Pipeline] }
00:22:39.568 [Pipeline] // stage
00:22:39.573 [Pipeline] }
00:22:39.591 [Pipeline] // node
00:22:39.596 [Pipeline] End of Pipeline
00:22:39.633 Finished: SUCCESS